The Great Deception: How DeepSeek, Moonshot, and MiniMax Used 24,000 Fake Accounts to Rip Off Claude

Anthropic, the San Francisco-based AI company, made headlines by accusing three prominent Chinese AI laboratories – DeepSeek, Moonshot AI, and MiniMax – of running large-scale campaigns to extract capabilities from its Claude models through fraudulent accounts. According to Anthropic, the campaigns involved more than 16 million exchanges with Claude, in violation of its terms of service and regional access restrictions. The practice, known as distillation, lets a competitor leapfrog years of research and investment by using the outputs of a larger AI model to train a smaller, more efficient one.

The tension between American and Chinese AI developers has been escalating, especially as Washington debates whether to tighten or loosen export controls on advanced AI training chips. Anthropic has been vocal about restricting chip sales to China, connecting Monday’s revelations to this policy fight.

Distillation, once an academic curiosity, has now become a crucial issue in the global AI race. It involves creating a smaller AI model by extracting knowledge from a larger one. While distillation is a legitimate training method used by many AI labs, it can also be exploited by competitors to capture capabilities without the investment required for research and development.
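In its legitimate form, distillation typically works by training the smaller "student" model to match the softened output distribution of the larger "teacher" model. The sketch below illustrates the standard distillation objective in plain Python; the logits and temperature value are illustrative only and do not describe any particular lab's training setup.

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into probabilities, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the
    student's -- the core objective of knowledge distillation."""
    p = softmax(teacher_logits, temperature)  # teacher "soft labels"
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that reproduces the teacher's logits incurs zero loss;
# a mismatched student incurs a positive loss that training would reduce.
teacher = [3.0, 1.0, 0.2]
loss_match = distillation_loss(teacher, [3.0, 1.0, 0.2])
loss_off = distillation_loss(teacher, [0.2, 1.0, 3.0])
```

The key point for the dispute described here is that this objective only needs the teacher's *outputs*, not its weights, which is why API access alone is enough to attempt capability extraction.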

Anthropic's disclosure describes deliberate, large-scale extraction of intellectual property by the Chinese labs, targeting specific Claude capabilities such as reasoning, tool use, and coding. According to the company, DeepSeek, Moonshot AI, and MiniMax used fraudulent accounts and coordinated their traffic to maximize throughput while evading detection.
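Anthropic has not published its detection methods. Purely as an illustration of the defender's side of this problem, the toy heuristic below flags clusters of accounts that share a client fingerprint and generate unusually high combined request volume; the function, field names, and thresholds are all hypothetical.

```python
from collections import defaultdict

def flag_coordinated_accounts(requests, volume_threshold=1000, cluster_size=5):
    """Toy heuristic (not Anthropic's actual system): group accounts by a
    shared client fingerprint and flag clusters that are both large and
    high-volume, a crude proxy for coordinated extraction traffic.

    `requests` is a list of (account_id, fingerprint) pairs, one per API call.
    """
    per_account = defaultdict(int)      # calls made by each account
    by_fingerprint = defaultdict(set)   # accounts sharing each fingerprint
    for account, fingerprint in requests:
        per_account[account] += 1
        by_fingerprint[fingerprint].add(account)

    flagged = set()
    for fingerprint, accounts in by_fingerprint.items():
        volume = sum(per_account[a] for a in accounts)
        if len(accounts) >= cluster_size and volume >= volume_threshold:
            flagged |= accounts
    return flagged
```

Real abuse-detection systems combine many more signals, but the sketch captures the basic asymmetry: attackers split traffic across many accounts precisely so that no single account looks anomalous on its own.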

Anthropic’s decision to frame this issue as a national security crisis rather than a legal dispute reflects the challenges in enforcing intellectual property laws against distillation. While copyright claims may be difficult to prove in this context, contractual violations are more straightforward. However, enforcing these terms against foreign entities operating through proxy services presents significant challenges.


In response to these attacks, Anthropic has implemented defensive measures and called for industry-wide cooperation. The implications of these revelations extend beyond the AI industry, influencing ongoing policy debates and shaping the future of API security.

Overall, Anthropic’s disclosure sheds light on the widespread and sophisticated nature of distillation attacks and underscores the importance of API security in the AI industry. Whether this evidence leads to a coordinated response or accelerates an arms race between attackers and defenders remains to be seen. Washington’s response will be crucial in determining how the industry addresses this evolving threat landscape.
