Anthropic vs. DeepSeek: Distillation Wars Reveal AI’s Next Battleground

February 23, 2026
[Image: Abstract illustration of AI data flows between US and Chinese laboratories]

1. Headline & intro

The AI race just crossed an invisible line. Anthropic is publicly accusing three fast‑rising Chinese labs of systematically “mining” its Claude models — not by hacking weights, but by pumping millions of prompts through its API and training their own systems on the answers.

This is more than a spat between model vendors. It’s a preview of how intellectual property, national security and cloud business models collide in the foundation‑model era. In this piece, we’ll unpack what Anthropic says happened, why it’s leaning on the geopolitics of chip exports, and what this new class of “distillation attacks” means for Europe, open‑source AI and anyone building on commercial APIs.

2. The news in brief

According to TechCrunch, Anthropic claims three Chinese AI firms — DeepSeek, Moonshot AI and MiniMax — created over 24,000 fake user accounts to query its Claude models at massive scale. Anthropic says those accounts generated more than 16 million interactions designed for “distillation”: using a stronger model’s outputs to train a competitor.

The company alleges each lab focused on different capabilities, including reasoning, tool use, coding, data analysis and computer‑use agents. Anthropic says one lab redirected roughly half its traffic to Claude immediately after a new version launched, apparently to copy its latest strengths.
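For readers unfamiliar with the mechanics, distillation at its core is simple: treat a stronger model’s API as a labelling machine, and train your own model on the (prompt, response) pairs it emits. The sketch below is purely illustrative — the `teacher` function is a hypothetical stand-in for a commercial model API, not any lab’s actual pipeline — but it shows why scale of API access, not access to weights, is the scarce ingredient.

```python
# Minimal sketch of output-based distillation (illustrative only;
# `teacher` is a hypothetical stand-in for a commercial model API).

def teacher(prompt: str) -> str:
    # In a real pipeline this would be an API call to the stronger model.
    return f"answer({prompt})"

def build_distillation_set(prompts):
    # Every API response becomes a supervised training example:
    # the student is later fine-tuned to map prompt -> teacher response.
    return [(p, teacher(p)) for p in prompts]

prompts = ["explain recursion", "write a sort in Python", "plan a web scraper"]
dataset = build_distillation_set(prompts)
# With millions of such pairs spanning reasoning, coding and tool use,
# the student inherits capabilities without ever touching the weights.
```

This is also why Anthropic’s figure of 16 million interactions matters: the quality of a distilled model scales with how thoroughly the teacher’s capability space is swept.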

These accusations surface while Washington debates how strictly to enforce export controls on advanced AI chips. TechCrunch reports that the Trump administration recently allowed firms like Nvidia to resume exporting high‑end accelerators such as the H200 to China. Anthropic argues that the scale of the alleged distillation requires access to those advanced chips and is using the case to defend tighter controls.

3. Why this matters: API business meets grey‑zone IP

What Anthropic is calling a “distillation attack” sits in an uncomfortable grey area. Technically, the Chinese labs appear to have used paid, or at least legitimate, access to Claude via thousands of accounts, then trained their own models on the responses. That’s not model theft in the classical sense — no weights were exfiltrated — but it is an attempt to reproduce Anthropic’s differentiation wholesale while offloading safety and R&D costs onto a rival.

Who benefits? In the short term, aggressive distillers like DeepSeek gain a huge time and cost advantage. Instead of spending years to discover techniques for reasoning, alignment or tool use, they can treat Anthropic’s API as a compressed textbook and replay it into their own models.

The obvious losers are US frontier labs whose business model depends on two assumptions: that their models are hard to copy, and that guardrails embedded in those models will meaningfully constrain misuse. Distillation undermines both. You can copy most of the capabilities while selectively discarding safety behaviours you don’t like.

There is also a broader ecosystem risk. If high‑end models can be partially cloned through scale alone, the value of frontier innovation tilts away from publishing anything publicly — including safety techniques — and towards secrecy. That’s bad for independent research, for open‑source communities and arguably for global safety.

Anthropic’s public move is therefore less about one Chinese rival and more about drawing a line: mass‑scale API farming by state‑aligned labs is being framed as an attack, not just “clever benchmarking.”

4. The bigger picture: from data scraping to capability extraction

We’ve seen versions of this story before. Social networks complained for years about rivals scraping their public data. Open‑source model creators routinely train on outputs from proprietary models to bootstrap performance. Search engines have been quietly doing query‑log distillation on user behaviour for decades.

What’s new is the combination of scale, geopolitics and safety. Distilling a chatbot that drafts emails is one thing; distilling a frontier‑level model that embeds bio‑risk filters and cyber‑security constraints is another.

The Anthropic episode fits into at least three broader trends:

  • Weaponisation of access controls. OpenAI has already restricted access in certain jurisdictions and tightened usage policies. Anthropic is now openly tying access to compliance and geopolitics. Access to US frontier models is becoming a strategic asset, not a neutral cloud service.
  • Model‑vs‑model competition. DeepSeek made waves with an open‑source reasoning model that challenged US labs on benchmarks at radically lower cost. If DeepSeek V4 indeed surpasses Claude and ChatGPT on coding, as TechCrunch notes is expected, the incentive for Western labs to clamp down on any perceived “free‑riding” will only grow.
  • Safety as industrial policy. By arguing that distillation strips away safeguards, Anthropic is aligning its commercial interest (harder to copy models) with Washington’s security narrative (keep advanced capabilities away from adversaries). Safety becomes a justification not just for model design choices but for export controls and trade policy.

In other words, the frontier AI race is shifting from a pure “who has the best model?” competition to a messy struggle over who controls access, data flows and compute.

5. The European angle: caught between walls and silos

For Europe, this dispute is not a distant US–China drama; it directly touches the continent’s strategic vulnerabilities. European companies and public institutions increasingly rely on US foundation models — Anthropic, OpenAI, Google, xAI — accessed via cloud APIs largely operated from US data centres. If those providers start segmenting access based on geopolitics and perceived security risk, European users can easily end up collateral damage.

The EU AI Act, GDPR, the Digital Services Act and the forthcoming product liability rules already push providers towards heavy transparency, logging and safety obligations in Europe. If Anthropic and peers now also deploy aggressive anti‑distillation detection — deep traffic analysis, tighter KYC, anomalous‑usage flags — European customers will feel that compliance overhead first.

At the same time, this rift arguably opens a window for European contenders such as Mistral AI, Aleph Alpha, Stability and regional cloud providers to differentiate on sovereignty: “European‑built models, trained in Europe, governed by EU rules.” For highly regulated sectors, that message will resonate.

But there is a risk that Europe again becomes primarily a rule‑setter and consumer, not a true AI power. If US–China tensions harden around compute exports and API access, and Europe lacks its own competitive frontier stack, Brussels may find itself regulating an ecosystem whose core strategic levers sit elsewhere.

For European policymakers, the Anthropic case is a signal that export controls and AI safety can no longer be treated as purely American debates; they shape Europe’s room for manoeuvre on everything from digital sovereignty to industrial policy.

6. Looking ahead: from quiet blocking to open fragmentation

Anthropic’s blog and comments to TechCrunch signal a clear direction of travel.

Technically, expect a rapid escalation of defensive measures:

  • Stricter identity checks for high‑volume API use, especially from high‑risk jurisdictions and cloud IP ranges.
  • Behavioural fingerprinting of traffic patterns that look like structured distillation: systematic sweeps of capability spaces, repetitive templated queries, and large‑batch evaluation runs.
  • Possibly even legal clauses that explicitly classify large‑scale distillation as prohibited “model extraction,” opening the door to litigation.
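To make the fingerprinting idea concrete: distillation traffic tends to collapse into a handful of prompt templates, while organic usage is diverse. A minimal sketch of that heuristic — assuming nothing about any provider’s actual detection stack, and using an invented `looks_like_distillation` helper — might normalise prompts and flag accounts whose traffic is dominated by one template:

```python
import re
from collections import Counter

def normalize(prompt: str) -> str:
    # Collapse numbers and quoted spans so templated variants
    # ("Solve problem 1", "Solve problem 2", ...) map to one shape.
    p = re.sub(r"\d+", "<N>", prompt)
    p = re.sub(r'"[^"]*"', "<Q>", p)
    return p

def looks_like_distillation(prompts, threshold=0.5):
    # Flag an account if most of its traffic collapses to a single
    # prompt template -- the "repetitive templated queries" pattern.
    shapes = Counter(normalize(p) for p in prompts)
    top_count = shapes.most_common(1)[0][1]
    return top_count / len(prompts) >= threshold

sweep = [f"Solve problem {i}: compute {i} + {i}" for i in range(100)]
organic = ["draft an email to my boss", "what is GDPR?",
           "fix this bug", "plan a weekend trip"]
# The sweep collapses to one template; the organic traffic does not.
```

Real systems would of course combine many weaker signals — timing, account clustering, IP ranges, coverage of capability benchmarks — but the asymmetry is the same: structured extraction is statistically loud, and providers are increasingly willing to listen.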

Politically, this case will be used in Washington as fresh evidence that loosening chip exports is short‑sighted. If US chips indirectly power the copying of US models, export‑control hawks have a simple talking point. Don’t be surprised if calls grow for fine‑grained export regimes: not just “which chips go to which country,” but “which customers and which use cases get which level of compute.”

Commercially, we should expect more fragmentation. Some labs will double down on closed APIs, heavy monitoring and regional access rules. Others — including parts of the open‑source ecosystem — will position themselves as more permissive alternatives, implicitly accepting that their models may be distilled but betting on community and speed instead.

The unresolved questions are uncomfortable: Where exactly is the legal line between legitimate use of an API and illicit distillation? Should safety concerns give model owners special IP‑like protections over their outputs? And who arbitrates when these disputes cross borders?

Whatever answers emerge, they will shape not just Anthropic and DeepSeek, but the next decade of how AI capabilities flow — or don’t — across the world.

7. The bottom line

Anthropic’s accusations mark distillation as the next major battleground in AI — a place where IP law, export controls and safety rhetoric collide. Treating large‑scale capability extraction as an “attack” may be justified, but it also accelerates the balkanisation of the AI ecosystem into tightly controlled blocs.

For developers, startups and policymakers in Europe and beyond, the real question is simple: in a world where even API calls are geopolitical, how much strategic dependence on a handful of frontier labs is still acceptable?
