When a sitting US president orders the entire federal government to drop one of the world’s leading AI labs, it’s not just a procurement story. It’s a signal about who will write the rules for military AI: democratic institutions, private labs, or whoever shouts loudest on social media. The clash between Donald Trump’s administration and Anthropic is less about today’s use cases and more about who gets to draw tomorrow’s red lines. And that should concern anyone in Europe relying on US AI infrastructure for critical systems.
The news in brief
According to Ars Technica, citing reporting from WIRED, US president Donald Trump has directed all federal agencies to stop using Anthropic’s AI systems. The order, announced on Trump’s Truth Social account, includes a six‑month phase‑out period for existing deployments, leaving room for potential renegotiation.
The move follows weeks of dispute between the Pentagon and Anthropic over contract language. The US Department of Defense reportedly pushed to revise a 2025 agreement so that Anthropic’s models could be used for “all lawful” military purposes, removing explicit restrictions on certain applications. Anthropic resisted, warning that such wording could open the door to fully autonomous lethal weapons or large‑scale domestic surveillance.
Anthropic currently provides customised Claude Gov models to the Pentagon under a deal worth around $200 million, as reported by WIRED, including work on classified systems via Amazon’s and Palantir’s platforms. Other labs—Google, OpenAI and xAI—have signed similar defence agreements, but Anthropic is the only one integrated into classified environments so far.
The public conflict escalated after Axios revealed that US officials had used Anthropic’s AI to help plan an operation targeting Venezuela’s president Nicolás Maduro. Officials and Anthropic traded public criticism, and defence secretary Pete Hegseth reportedly gave Anthropic a deadline to accept the “all lawful use” clause. The White House has now escalated to a full government ban.
Why this matters
This is not just a fight over a contract clause. It’s a power struggle over who gets to define acceptable behaviour for AI in war.
In the short term, Anthropic is the obvious loser. A federal ban threatens a lucrative, high‑prestige revenue stream and complicates its positioning as a serious infrastructure provider. Competitors that have already softened their military restrictions—like Google and OpenAI—may see an opening to grow their defence footprint.
But the deeper risk sits with governments and, ultimately, citizens. If “all lawful use” becomes the standard template, military AI ethics effectively get outsourced to whatever legislatures and courts have happened to codify so far. In most countries, detailed rules for autonomous weapons and AI‑driven surveillance simply don’t exist yet. A vacuum in law plus a blanket “all lawful” clause is an invitation to push boundaries until something breaks—or until scandal forces belated regulation.
Anthropic, for all its flaws, is at least trying to embed technical and contractual brakes on some of the scarier use cases. Its stance has already encouraged internal resistance across the sector: hundreds of workers at OpenAI and Google reportedly signed an open letter supporting Anthropic’s position and criticising their own employers’ concessions on military use. Trump’s ban sends those employees, and any safety‑minded executives, a clear message: challenge the security establishment and you’ll be punished.
The result could be a chilling effect on voluntary safety norms. Instead of labs competing on who has the most robust guardrails for sensitive domains, they may feel pressure to compete on who is the most “flexible” partner to the military and intelligence community. That’s exactly the race to the bottom many AI governance experts have warned about.
The bigger picture
The Anthropic clash sits at the crossroads of three powerful trends: the militarisation of commercial AI, the politicisation of tech firms, and the slow pace of formal regulation.
First, Silicon Valley’s relationship with defence has flipped. A decade ago, Google employees rebelled against Project Maven, an early Pentagon AI initiative, and the company retreated from the contract. Today, major labs actively court defence work; the question is no longer whether to work with the military, but on what terms. Anthropic was the first big lab to take its models into classified Pentagon environments—a symbolic breakthrough. Trump’s ban weaponises that dependence.
Second, this is part of a broader pattern of political leaders using access to critical tech as leverage. Washington has already restricted Huawei and advanced chip exports to China; now we’re watching a similar logic applied inward, against a US company that resists political demands. It blurs the line between legitimate national‑security oversight and raw political muscle.
Third, the legal framework is far behind the technology. There is no comprehensive US statute governing autonomous weapons, nor a global treaty covering AI in command‑and‑control systems. In this vacuum, private labs have tried to fill the gap with voluntary “red lines”: no fully autonomous lethal systems, no mass dragnet surveillance. Trump’s Pentagon appears determined to replace those private red lines with a single, vague standard: if it’s legal, it’s fair game.
We’ve seen versions of this movie before. The battles over encryption backdoors, the JEDI cloud contract, and content moderation on social platforms all revolved around the same question: when tech infrastructure becomes strategically critical, how much autonomy do private companies really have? Anthropic is just the latest test case—and AI, with its dual‑use nature and opacity, raises the stakes considerably.
The European angle
For Europe, this dispute is more than distant Washington drama.
First, it highlights Europe’s dependence on a small number of US AI vendors. European defence ministries, intelligence agencies and critical infrastructure operators increasingly rely on US cloud and AI platforms—often via the same intermediaries named in this saga, like Amazon and Palantir. If a single US political decision can knock a major provider out of government use, European customers should assume the same could happen to them, directly or indirectly.
Second, the clash exposes a regulatory gap on this side of the Atlantic. The EU AI Act, agreed politically in 2023, formally excludes most military uses from its scope. That means Brussels has strong opinions on AI for credit scoring or recruitment, but almost nothing binding to say about battlefield autonomy or AI‑driven targeting. Member states are left to set their own defence rules, and they are under pressure from NATO to accelerate AI adoption.
Trump’s ultimatum to Anthropic should be a wake‑up call: if Europe wants “trustworthy AI” in defence, it cannot simply rely on US labs voluntarily holding the line. It will need its own norms and, eventually, its own specialised models. There are early efforts—from Franco‑German defence AI projects to smaller ecosystems in places like Berlin, Paris, Ljubljana or Zagreb—but nothing yet that rivals US capabilities.
Third, there’s a cultural and political angle. European publics are generally more sceptical of mass surveillance and lethal autonomy than US voters, and courts in Germany or the Netherlands have been willing to rein in security policies. If US labs are pushed to quietly drop their own ethical constraints, European governments may either import technologies that clash with domestic values or get locked out of cutting‑edge capabilities altogether.
Looking ahead
The six‑month phase‑out is a negotiation device as much as a policy. Three scenarios look plausible.
Quiet compromise. Behind closed doors, Anthropic and the Pentagon find a formula that satisfies both sides—perhaps keeping explicit bans on fully autonomous lethal use and domestic mass surveillance, while granting wider flexibility elsewhere and stronger government oversight. Trump can still claim victory; Anthropic keeps its contract; the symbolic “ban” quietly evaporates.
Punitive follow‑through. The administration actually forces agencies to rip Anthropic out of their stacks. That would send a brutal signal to the rest of the industry: align with political demands or lose government business. Expect rapid convergence towards the “all lawful use” language and a deeper exodus of idealistic talent from defence‑linked AI teams.
Fragmentation and balkanisation. If the dispute drags on publicly, it could accelerate a split between “sovereign” and “global” AI stacks. Governments—especially in Europe and parts of Asia—may double down on building their own controlled models for defence and intelligence, reducing reliance on US commercial labs. That would be noisy and expensive, but strategically understandable.
Watch for three indicators over the coming months: whether other labs publicly reaffirm Anthropic‑style red lines or stay quiet; whether US lawmakers move to clarify legal limits on military AI; and whether European governments start talking more concretely about defence‑focused AI standards beyond the AI Act.
The biggest unanswered question is whether any major AI vendor will be willing to walk away from state security contracts on principle. If even Anthropic ultimately gives in, the message to the market will be clear: values are negotiable, revenue is not.
The bottom line
Trump’s move against Anthropic is less about one company and more about establishing who sets the boundaries for military AI. Allowing “all lawful use” to become the default would lock in today’s regulatory vacuum as tomorrow’s strategic norm. For Europe, which depends heavily on US AI infrastructure yet aspires to higher ethical standards, this is a warning shot: build your own guardrails—or someone else will decide where the red lines go. The open question is whether any major AI lab is willing to pay a real price to defend its principles.