AI workers versus the Pentagon: why Anthropic’s fight is really about power
The clash between Anthropic and the US Department of Defense is not just another procurement dispute. It is one of the first open tests of whether large AI labs can enforce their own red lines when a powerful state actor pushes back – and whether employees at rival firms will side with ethics over corporate advantage. For European readers, this is also a preview of battles that will arrive here once foundation models sit at the core of defence and security systems. In this piece, we unpack what happened, why employees at OpenAI and Google are taking the risk of intervening, and what this signals for the future balance of power between governments, tech giants and their own staff.
The news in brief
According to TechCrunch, more than 30 employees from OpenAI and Google DeepMind filed an amicus brief on Monday in support of Anthropic’s lawsuit against the US Department of Defense (DoD). The move came after the Pentagon formally designated Anthropic as a supply‑chain risk – a label usually applied to foreign adversaries or untrusted vendors.
TechCrunch reports that this designation followed Anthropic’s refusal to allow its AI systems to be used for mass surveillance of US citizens or for autonomously firing weapons. The DoD, for its part, argued it should be able to use contracted AI systems for any purpose it considers lawful.
The filing, which includes senior figures such as Google DeepMind’s chief scientist Jeff Dean, argues that the government’s move is an abuse of power that will chill open debate about AI risks. On the same day it blacklisted Anthropic, the Pentagon reportedly signed a fresh deal with OpenAI, a step that triggered protests among some OpenAI employees.
Why this matters: power, precedent and pressure
Anthropic’s dispute with the Pentagon is about much more than one contract. It goes to the heart of who gets to decide the limits of high‑stakes AI deployment: the state, the vendor, or the people building the systems.
The immediate loser is Anthropic, which now carries a stigma that US procurement circles usually reserve for companies like Huawei: being treated as a security liability. That label could lock Anthropic out of a wide range of US government deals, not only with the military. From Washington’s perspective, the designation is leverage – a way of signalling to the entire AI industry that refusing certain classes of military use can carry costs.
But there are other, less obvious losers. OpenAI and Google leadership now find themselves squeezed between governments hungry for advanced AI and a workforce that has grown far more willing to rebel over ethical red lines. By publicly siding with a direct competitor, their own employees are broadcasting that loyalty to shared values matters more than company market share.
The employees’ brief also highlights a structural problem: in the absence of robust public law on military AI, the only real guardrails are contractual clauses, technical restrictions and internal policies. If a government can retaliate against a vendor for using those tools to say no, the message to the rest of the industry is clear: comply quietly, or risk being frozen out.
In practice, that could tilt the competitive landscape in favour of players willing to offer governments near‑blanket usage rights, and against firms that try to build reputations around safety and constrained deployment. It might also accelerate a trend we already see in AI safety circles: talented researchers gravitating toward organisations perceived as having a backbone against weaponisation.
The bigger picture: from Project Maven to foundation models
This episode does not come out of nowhere. It fits into almost a decade of growing tension between Big Tech and the US national security establishment.
In 2018, Google engineers famously pushed back against Project Maven, a Pentagon programme using AI to analyse drone footage. Thousands signed an internal letter; some resigned, and Google ultimately declined to renew the contract. Later, Microsoft and Amazon employees objected to cloud and augmented-reality contracts with the US military and immigration authorities; in those cases, management tried to reassure staff while largely keeping the contracts.
What has changed is the centrality of foundation models. Unlike narrow computer vision or cloud hosting, systems like Claude, ChatGPT and Gemini are general‑purpose capabilities. Once delivered into a defence environment with broad usage rights, they can be re‑purposed in ways far beyond what the original development team imagined.
There is also a historical echo in the choice of label. Calling Anthropic a supply‑chain risk is reminiscent of how US authorities treated some Chinese telecom vendors in the 2010s. Back then, the concern was foreign control of critical infrastructure. Now, the fear seems to be that a domestic supplier might refuse to bend to state priorities.
Competitively, this creates a new axis. AI companies are no longer compared only on model quality, pricing or cloud integration, but also on how far they will go in resisting or accommodating powerful government clients. Some, like Palantir, have built their brand on close alignment with defence and intelligence. Others, like Anthropic, have pitched themselves as safety‑first labs. The Pentagon’s move is a reminder that Washington can reshuffle the playing field with the stroke of a pen.
More broadly, this is part of a global pattern. In China, large model providers are tightly aligned with state objectives. In the US and Europe, governments are trying to influence vendors through procurement, regulation and informal pressure. The Anthropic case is an early, very visible skirmish over how much independence private AI labs will actually enjoy.
The European angle: ethics by law, not by contract
For Europe, this dispute is a warning and an opportunity.
On paper, the EU AI Act deliberately carves out military and national security use from its core scope. Member states did not want Brussels telling their defence ministries what AI they can or cannot deploy. But the same law imposes obligations on general‑purpose models and high‑risk systems, many of which will inevitably end up in dual‑use or defence‑adjacent settings.
The Anthropic case shows what happens when ethical limits on use exist only in private contracts or internal policies. They are fragile. A powerful client can try to override them, and there is no clear legal framework backing the vendor’s stance. For European AI firms – from Berlin and Paris to Ljubljana and Zagreb – this underlines the need to embed usage restrictions explicitly in law and regulation, not just in terms and conditions.
It also raises awkward questions about dependence on US suppliers. If Washington is ready to pressure its own AI champions in this way, how much leverage would a European ministry have in a dispute over surveillance or autonomous targeting? And how comfortable should EU institutions be building critical capabilities on top of foreign models whose behaviour may be shaped by US security doctrine?
Europe’s privacy‑conscious culture and legal framework – from the GDPR to the Digital Services Act – provide a different starting point. Civil society campaigns against lethal autonomous weapons have been strong across the continent. In that context, the spectacle of US AI workers publicly defending a competitor’s right to say no to mass surveillance and autonomous weapons will resonate with many European engineers and policymakers.
Looking ahead: what to watch next
The immediate legal path will likely be slow and technical, but the political fallout will move faster.
US courts will have to decide whether the Pentagon followed proper process and had a rational basis for labelling Anthropic a supply‑chain risk. Even if judges ultimately side with the government, discovery alone could surface uncomfortable details about how the decision was made and how quickly the OpenAI deal followed.
Outside the courtroom, the key question is whether this incident deters or galvanises other AI labs. If Anthropic suffers long‑term commercial damage without any policy change, risk‑averse boards may decide that explicit red lines on military use are not worth the trouble. If, however, employee activism at multiple companies forces management and politicians to reconsider, we could see a new equilibrium where some classes of use – such as fully autonomous weapons – become commercially toxic in liberal democracies.
For European readers, watch three things over the next 12 to 24 months:
- Whether major EU and UK defence procurements start to include clear prohibitions on certain applications of AI, rather than leaving everything to vague lawful‑purpose clauses.
- How the European Commission and national regulators interpret the AI Act for general‑purpose models that may be fine‑tuned for military support roles.
- Whether European engineers follow their US counterparts in using open letters, walkouts and amicus briefs as tools to challenge controversial deals.
If AI workers become a de facto fourth branch of power – alongside legislators, regulators and corporate boards – governments on both sides of the Atlantic will have to rethink how they engage with the companies that build critical models.
The bottom line
Anthropic’s confrontation with the Pentagon is the first major test of whether an AI lab can enforce ethical red lines against its most powerful potential customer. The decision by OpenAI and Google employees to publicly back a rival shows that, in this industry, labour is emerging as a real counterweight to both corporations and the state. Whether courts side with Anthropic or not, the message to Europe is clear: if we want trustworthy AI in defence and security, we cannot rely on quiet contracts and good intentions alone. Would you, as a developer or policymaker, be ready to walk away from a lucrative deal for the sake of those limits?