1. Headline & intro
The same AI model Anthropic says is too dangerous for the public is reportedly now running inside the NSA. That contrast captures the next decade of AI politics in one image: highly capable systems locked behind classified doors, justified in the name of security, while regulators still argue over chatbot guardrails. In this piece, we’ll look at what the NSA’s quiet use of Mythos really signals: a shifting balance of power between governments and frontier AI labs, an emerging market for “classified AI,” and a reason for Europeans to pay close attention before their own intelligence agencies line up for similar deals.
2. The news in brief
According to TechCrunch, citing Axios, the U.S. National Security Agency is using Mythos Preview, Anthropic’s recently announced frontier AI model focused on cybersecurity.
Anthropic unveiled Mythos earlier this month but declined to release it broadly, arguing that the model is powerful enough to meaningfully assist offensive cyber operations and therefore should be tightly controlled. Access was reportedly limited to about 40 organisations worldwide, with only a subset named publicly.
TechCrunch reports that the NSA is among the undisclosed customers and is primarily using Mythos to scan digital environments for security vulnerabilities. The U.K.’s AI Security Institute has also confirmed access.
This comes shortly after the U.S. Department of Defense — the NSA’s parent department — labelled Anthropic a “supply chain risk” after the company refused to give the Pentagon effectively unrestricted access to its models, including for mass domestic surveillance and autonomous weapons development. Despite that dispute, Anthropic’s ties to the current Trump administration appear to be warming, with CEO Dario Amodei recently meeting senior White House officials.
3. Why this matters
Mythos at the NSA is not just another “AI in government” story. It exposes three uncomfortable realities.
First, the national security state is perfectly willing to denounce a model as dangerously powerful in public, then adopt it in secret. The Pentagon calls Anthropic a supply‑chain risk; the NSA quietly plugs the same company’s most restricted model into its cyber tooling. The message is clear: if a capability exists and is useful for offence or defence, the security apparatus will try to get it, even as the official line remains that it is too risky for everyone else.
Second, it shows how much leverage a handful of AI labs now hold. Anthropic can say no to some Pentagon demands (domestic mass surveillance, autonomous weapons) yet still end up inside the most secretive U.S. agency. That is a remarkable shift from the classic defence‑contractor model, where the government mostly dictated terms. Here, the vendor’s internal safety policies are shaping how a superpower’s intelligence services can use a strategic technology.
Third, the deal creates clear winners and losers. Who benefits? The NSA gains cutting‑edge automated vulnerability discovery at a moment when software supply chains are growing impossibly complex. Anthropic gains prestige, political capital in Washington and a proof point that its “responsible but useful” sales narrative works in practice.
Who loses? Smaller AI security startups now compete not only with Palantir and traditional infosec vendors, but with the closed‑door cachet of a frontier model effectively blessed by the NSA. And civil society loses visibility: if offensive‑grade models migrate into classified deployments, public oversight and academic auditing become nearly impossible.
4. The bigger picture
This story sits at the intersection of three longer‑term trends.
1. The dual‑use trajectory of AI. Every major frontier model is now clearly dual‑use: it can generate secure code or malware, spot vulnerabilities or design exploits. Anthropic’s own justification for restricting Mythos — too capable for public release — is the same logic governments have long applied to zero‑day exploits or advanced cryptography. Recall the 1990s “crypto wars,” when Washington tried to treat strong encryption as munitions. We are replaying that debate with far more general‑purpose systems.
2. The nationalisation of compute and models. Governments are racing to secure domestic compute capacity and preferential access to top‑tier models. The U.S. has already imposed export controls on high‑end GPUs; China is pushing its own foundation models; the U.K. is building state‑backed AI research capacity. Mythos quietly running at the NSA is one tile in a bigger mosaic: strategic AI capability increasingly treated like nuclear tech or satellites — an asset to be hoarded, regulated and, where possible, monopolised.
3. Corporate ethics vs. state power. Anthropic has tried to draw red lines: no mass domestic surveillance, no autonomous weapons. That stance echoes earlier tensions, such as Google employees rebelling against Project Maven or Microsoft staff protesting military HoloLens contracts. The difference now is capability concentration. Saying no when you run a near‑monopoly on a category of model is not just a symbolic gesture; it can meaningfully constrain a state’s options — at least until another vendor or an open‑source rival fills the gap.
In that sense, Mythos is a preview of a world where a small number of boards in San Francisco or London quietly decide how far governments can go with AI.
5. The European / regional angle
For European readers, the obvious question is: what will Berlin, Paris, Rome or Ljubljana do when their defence ministries want their own Mythos?
The EU AI Act classifies many security and critical‑infrastructure systems as “high‑risk,” demanding rigorous risk management, documentation and human oversight. National security itself is technically outside its scope, but the line is blurry. An NSA‑style deployment that “only” scans for vulnerabilities could easily bleed into automated offensive capabilities or large‑scale surveillance if ported into a European intelligence context.
Europe also has a long memory of U.S. surveillance overreach. After the Snowden revelations and the invalidation of the Safe Harbour and Privacy Shield data‑transfer schemes (Schrems I and II), trust in U.S. intelligence assurances is thin. If a U.S. company runs a classified‑grade model for European agencies, who really controls the system and the logs? The CLOUD Act still allows U.S. authorities to demand certain data from American providers, even when hosted in Europe.
There is a second, more strategic angle: industrial policy. If Mythos‑class systems become the default for cyber defence, the EU has a choice. Either accept reliance on a small set of U.S. labs — Anthropic today, OpenAI and others tomorrow — or invest heavily in European equivalents, perhaps via initiatives like GAIA‑X, EuroHPC and new defence‑tech funds. For smaller markets like Slovenia or Croatia, that decision will likely be taken in Brussels, Paris and Berlin — but it will determine what tools national CERT teams and security agencies use a decade from now.
6. Looking ahead
Expect three developments over the next few years.
1. A quiet boom in classified AI deployments. Mythos at the NSA will not be an outlier. Intelligence and defence organisations worldwide will start negotiating bespoke access to high‑end models, often under euphemisms like “research partnerships” or “secure previews.” The public will only hear about a fraction of them, usually when something leaks or fails spectacularly.
2. Regulatory collisions. The same Anthropic model family might be marketed to European banks as a compliant, well‑governed AI system under the AI Act, while its more capable sibling runs in classified mode for U.S. agencies. That raises awkward questions: how do European regulators audit risk when the real capabilities are hidden behind export controls and NDAs? And what happens if a vulnerability or misuse in the classified version spills into civilian infrastructure?
3. Pressure on open‑source and smaller players. As governments wrap frontier models in secrecy, they will likely push for tighter controls on open‑source systems they cannot easily monitor. We already see debates around “open weights” vs. restricted releases. Mythos strengthens the argument of those who claim powerful models should only live in the hands of a few “responsible” vendors — an argument that conveniently favours incumbent giants.
For readers — whether you work in security, policy or startups — the key is to watch not just the headline disputes (like the Pentagon labelling vendors a risk) but the quiet, often contradictory deals that follow. That is where the real power arrangements of the AI age will be forged.
7. The bottom line
The NSA’s reported use of Mythos shows how quickly lofty talk about “AI safety” collapses once national security enters the room. A model deemed too dangerous for the public became acceptable the moment it could boost state power. Europe now faces its own choice: emulate this approach, or insist on a more transparent, rules‑based path to AI‑driven security. When your government’s cyber team asks for its own Mythos, will there be any democratic debate at all — or will the decision be buried, like the model itself, behind a classified interface?