Anthropic vs. OpenAI: When ‘AI Safety’ Meets the Pentagon

March 4, 2026
5 min read
[Illustration: rival AI companies facing each other over a stylized Pentagon building]


The AI safety debate just stopped being theoretical and turned into a procurement fight. Anthropic walked away from an expanded deal with the U.S. Department of Defense (DoD) over how its models could be used; OpenAI stepped in and signed. Now Anthropic’s CEO Dario Amodei is accusing Sam Altman and OpenAI of misrepresenting what’s actually in that contract. This is no longer only about whose model is smarter. It’s about who the public, regulators and talent believe when AI systems are wired directly into military and surveillance infrastructure — and whether “all lawful uses” is a guardrail or a loophole.

The news in brief

According to TechCrunch, citing an internal memo obtained by The Information, Anthropic CEO Dario Amodei sharply criticized OpenAI’s new contract with the U.S. Department of Defense.

Anthropic already has a roughly $200 million agreement with the Pentagon, but recent talks about expanding access reportedly collapsed when the DoD demanded essentially unrestricted use of Anthropic’s AI systems for any purpose allowed under U.S. law. Anthropic pushed for explicit bans on domestic mass surveillance and autonomous weapons; no deal was reached.

The DoD then struck a separate agreement with OpenAI. In a blog post, OpenAI said its systems can be used for “all lawful purposes,” while claiming that activities like large‑scale domestic spying are outside that definition and were explicitly ruled out in the contract.

Amodei, in his memo, argued that OpenAI’s public messaging about these protections is misleading and primarily aimed at calming internal employee concerns. TechCrunch also notes that after the DoD announcement, estimated ChatGPT uninstalls spiked and Anthropic’s app briefly climbed near the top of the App Store charts.

Why this matters

This clash is not just corporate trash‑talk; it goes to the heart of AI’s legitimacy.

First, the dispute exposes how elastic phrases like “all lawful uses” really are. Laws shift, interpretations shift faster, and classified legal opinions can quietly redefine what counts as “lawful.” If OpenAI’s real red lines are defined by current U.S. law rather than by its own independent policy, then ultimately the Pentagon’s lawyers, not OpenAI’s safety team, are in the driver’s seat.

Anthropic is trying to draw a different line: “we ship or we walk.” By refusing a contract in which those red lines are not spelled out in writing, it is betting that long‑term trust with users, employees and regulators is worth more than short‑term defense revenue. In the near term, Anthropic gains moral high ground and a powerful brand narrative; OpenAI gains cash, influence and deep entanglement with the U.S. security state.

Second, this is an employee‑retention story. Both companies are full of researchers who genuinely worry about misuse of frontier models. If staff conclude that leadership is shading the truth about military work, defections will follow. The fact that Amodei’s memo explicitly talks about OpenAI employees as an audience shows how central talent politics have become.

The immediate loser is the idea that “AI safety” is a neutral, technical discipline. This episode makes clear it is also a bargaining chip in high‑stakes government deals — and different labs are willing to cash it in at very different prices.

The bigger picture

We have seen this movie before, just with weaker AI. When Google employees revolted against Project Maven, a Pentagon drone‑imagery initiative, the company backed away — only for other vendors, from smaller defense startups to long‑time contractors, to step in. The military’s demand signal doesn’t vanish; it simply routes around whichever company claims the moral veto.

The difference in 2026 is that general‑purpose AI systems are now flexible enough to sit in the critical path of targeting, intelligence analysis and information operations, not just back‑office logistics. That massively raises the stakes of every API key the Pentagon gets.

At the same time, we’re seeing the rise of explicit “defense‑first” AI players – think Palantir, Anduril and a swarm of NATO‑adjacent startups – who wear their military alignment as a badge of honor. Anthropic and OpenAI have tried to occupy a middle ground: consumer and enterprise products, plus selective government work under the banner of “responsible use.” This dispute suggests that middle ground is collapsing.

OpenAI’s decision aligns it more closely with classic Big Tech patterns: say yes to the state, add some policy language about oversight, and trust that public outrage will fade. Anthropic is experimenting with a rarer stance: accepting some government money but declining when the terms clash with stated safety principles.

Historically, the companies that win sensitive infrastructure markets are not always the ones with the best technology, but those with the deepest political integration. If OpenAI becomes the default “national champion” for U.S. AI in defense, that could reshape standards, export controls and even de facto global norms. Anthropic’s bet is that an alternative, more constrained model of partnership will eventually be demanded by allies, regulators and citizens who do not want a single U.S. lab acting as an uncritical extension of the Pentagon.

The European / regional angle

For Europe, this fight lands in the middle of a regulatory identity crisis. The EU AI Act, as politically agreed, largely carves out military uses from its scope, on the theory that defense is a national competence. Yet EU citizens are among the most skeptical in the world about surveillance and automated warfare, and GDPR and the Digital Services Act already constrain how data and platforms can be used.

When European governments and corporations buy access to U.S. frontier models, they are implicitly buying into the provider’s governance model. If OpenAI is comfortable accepting a broad “all lawful uses” clause with the U.S. Department of Defense, why would it accept stricter limits for a European ministry or police agency unless compelled by law? Conversely, Anthropic’s stance is much closer to the precautionary rhetoric that Brussels has used for years.

This creates an opening for European labs like Mistral, Aleph Alpha or DeepMind’s London‑rooted teams to differentiate on governance, not just performance. It also hands leverage to data‑protection authorities and competition regulators: procurement rules could start to require verifiable, contractual red lines around surveillance and weaponisation, not just generic talk of ethics.

For smaller markets like those in Central and Eastern Europe, where local startups often integrate U.S. APIs into public‑sector tenders, the question becomes practical: do you want your national infrastructure built on top of a vendor that has signalled maximum flexibility to the Pentagon, or one that has proven willing to forgo revenue over use‑case limits? That is no longer an abstract ethics seminar; it is a line item in RFPs.

Looking ahead

Three trajectories are worth watching.

First, the trust curve. The reported spike in ChatGPT uninstalls suggests that a segment of the public is willing to punish perceived alignment with military work, at least symbolically. Whether that reaction holds up over months — and whether enterprises care at all — will determine how costly this deal really is to OpenAI’s brand.

Second, the talent market. If even a small but high‑leverage fraction of OpenAI researchers and engineers decide this crosses their personal red line, Anthropic and other labs will gladly hire them. Expect to see more candidates asking pointed questions about defense contracts in interviews, and more companies publishing high‑level “acceptable use” charters that look reassuring but are hard to verify.

Third, verification itself. Right now, the public has to take both Anthropic and OpenAI at their word about what is or isn’t in classified or confidential contracts. Over time, regulators — especially in the EU, and possibly U.S. congressional overseers — are likely to demand more transparency about how “lawful use” clauses interact with export controls, human‑rights law and surveillance rules.

The risk is obvious: a slow slide from clearly defensive, non‑lethal applications into more contested territory, justified step by step as still “lawful.” The opportunity is also real: labs that can credibly demonstrate enforceable limits may become partners of choice for democratic governments trying to distinguish themselves from authoritarian AI militarisation.

The bottom line

The Anthropic–OpenAI rift over the Pentagon deal is a stress test for the entire idea of “responsible AI.” If “all lawful uses” is the best the industry can offer as a safeguard, then we are effectively outsourcing AI ethics to shifting national security law. Anthropic has chosen to walk away from that; OpenAI has chosen to trust its lawyers and its spin. Users, employees, regulators — and you, as someone who builds or buys technology — now have to decide which model of AI power you are willing to live with.
