1. Headline & intro
Washington is quietly turning the biggest U.S. banks into test pilots for one of the most contentious AI models on the market. That alone should make anyone who cares about financial stability and AI governance pay attention. When the Treasury Secretary and the Fed Chair nudge Wall Street toward a single, highly capable model built by a company they’ve simultaneously branded a supply‑chain risk, something deeper is going on than a routine tech trial. In this piece, we’ll unpack what’s really at stake with Anthropic’s Mythos in the banking system, why regulators are split, and what this U.S. experiment means for Europe and the rest of the world.
2. The news in brief
According to TechCrunch, citing reporting from Bloomberg, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell called in executives from major banks for a meeting this week. In that meeting, they reportedly encouraged banks to experiment with Anthropic’s new AI model, Mythos, specifically to help detect security vulnerabilities.
JPMorgan Chase is named as an initial partner for the model. Bloomberg’s reporting, referenced by TechCrunch, says that Goldman Sachs, Citigroup, Bank of America and Morgan Stanley are also testing Mythos.
Anthropic announced Mythos this week and is deliberately limiting access. The company argues that Mythos is unusually good at finding security weaknesses, even though it wasn’t trained as a cybersecurity‑specific system. Some observers see that positioning as either security hype or a clever enterprise sales tactic.
Complicating matters, TechCrunch notes that Anthropic is currently in a court fight with the Trump administration over the U.S. Department of Defense labeling the company a supply‑chain risk after negotiations about government usage limits collapsed. The Financial Times, meanwhile, reports that U.K. financial watchdogs are also examining the risks posed by Mythos.
3. Why this matters
The world’s largest banks are critical infrastructure. When the U.S. Treasury and the Fed gently steer those institutions toward a single frontier AI model, they are doing more than promoting innovation; they are shaping the technological backbone of global finance.
There are clear winners in the short term. Anthropic gains enormous validation: if Mythos becomes embedded in the risk and security workflows of JPMorgan or Citi, that is the kind of reference customer money can’t normally buy. For U.S. officials, encouraging use of a domestically controlled model helps reduce dependence on foreign AI providers and aligns with a broader national‑security agenda: keep the most powerful tools close, but not just inside government.
Banks also stand to benefit. Large language models are already useful for code review and vulnerability discovery. A model that’s particularly strong at spotting weak points in complex, legacy systems could save millions in breach costs and regulatory fines. It could also help institutions make sense of sprawling, decades‑old IT estates that human teams struggle to fully map.
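To make that concrete, here is a minimal sketch of what LLM-assisted vulnerability review looks like in practice. Neither Mythos's interface nor any bank's pipeline is public, so everything below is illustrative: the prompt wording is invented, and the model call is shown only as a commented placeholder (the `"mythos"` model name is not a real identifier).

```python
# Illustrative sketch of LLM-assisted security review.
# The prompt template and helper are assumptions, not any vendor's API.
import textwrap

# Dedent the template first, then fill it in, so multi-line source
# snippets don't break the indentation handling.
REVIEW_TEMPLATE = textwrap.dedent("""\
    You are reviewing bank-internal code for security weaknesses.
    File: {filename}
    Report only concrete, exploitable issues, with line references.

    <code>
    {source}
    </code>""")

def build_review_prompt(filename: str, source: str) -> str:
    """Wrap one source file in a narrowly scoped security-review prompt."""
    return REVIEW_TEMPLATE.format(filename=filename, source=source)

# In a real pipeline this prompt would be sent to a model, e.g. via the
# Anthropic Python SDK (model name here is purely hypothetical):
#
#   client = anthropic.Anthropic()
#   reply = client.messages.create(
#       model="mythos",  # illustrative, not a real model identifier
#       max_tokens=1024,
#       messages=[{"role": "user",
#                  "content": build_review_prompt("payments.py", code)}],
#   )
```

The point of the narrow prompt is exactly the over-reliance worry raised below: the model is asked for concrete, line-referenced findings that a human security team can verify, not a general blessing of the codebase.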
But the risks are at least as significant. First, concentration: if multiple systemically important banks all rely on the same proprietary model for security analysis, a flaw or backdoor in that model instantly becomes a systemic risk. Second, opacity: Mythos is a black box controlled by a private company currently in legal conflict with the same government that is nudging banks toward it. That raises governance and accountability questions regulators have barely begun to answer.
There is also a subtle incentive problem. If regulators treat “we used Mythos” as a de facto gold standard of diligence, banks may over‑rely on the tool and under‑invest in their own security expertise. Outsourcing your immune system rarely ends well.
4. The bigger picture
This episode sits at the intersection of three powerful trends.
First, the securitization of AI. Frontier models are increasingly framed as dual‑use capabilities: tools that can defend networks just as easily as they can help discover or even design new exploits. Anthropic’s decision to restrict access to Mythos because it is “too good” at vulnerability discovery mirrors earlier debates over publishing offensive cybersecurity research or exploit toolkits. The difference is scale: a large model can surface weaknesses across millions of lines of code in hours.
Second, the quiet race to own AI in critical infrastructure. In recent years, we’ve already seen cloud vendors and analytics firms become deeply embedded in government and financial systems – think Palantir in public sector analytics or Microsoft in government cloud. Mythos in banking security is the next phase: a general‑purpose AI system becoming part of the core safety mechanisms of the financial system.
Third, regulatory self‑contradiction. On one side of the U.S. government, the Department of Defense brands Anthropic a supply‑chain risk after the company pushes back on unconstrained military use of its models. On another, top economic officials are, according to Bloomberg’s reporting, effectively acting as sales champions for the same company’s most sensitive product.
This duality is not entirely new; governments have long had conflicting views on powerful technologies, from encryption to 5G equipment. But it signals where the AI industry is heading: a world where the same model might simultaneously be subject to export controls, national‑security vetting, and aggressive adoption pushes from economic policymakers.
For Anthropic’s competitors – OpenAI, Google, Meta and others – Mythos’s positioning as a quasi‑security model creates pressure. Do they respond with similar “defensive AI” products? Do they emphasize safer, more tightly scoped tools? Or do they quietly lobby regulators to avoid an effective mandate for a rival’s system?
5. The European/regional angle
From a European perspective, this looks like a familiar pattern: the U.S. moves fast to embed a domestic technology supplier into critical infrastructure, while European institutions are still drafting guidelines and debating risk classifications.
Under the EU AI Act, AI used in banking for risk assessment, fraud detection or other core functions will fall under stringent “high‑risk” requirements. Add cybersecurity in financial infrastructure to the mix, and Mythos‑like models almost certainly land in the most tightly regulated bucket. Banks in the eurozone already answer to the European Central Bank, national supervisors, and the new Digital Operational Resilience Act (DORA), which focuses specifically on ICT and cyber resilience.
If U.S. megabanks normalize the use of a powerful U.S. model for vulnerability discovery, European banks will face pressure to follow – from investors, from rating agencies, and possibly from their own regulators who don’t want to be seen as lagging on best practices. But EU law will require much more transparency: documentation of training data, model behavior under stress tests, and robust human oversight.
That creates an opening for European AI providers. Companies like Germany’s Aleph Alpha or France’s Mistral AI, as well as smaller regional players, can position themselves as “AI for critical infrastructure, born compliant with EU rules.” At the same time, the European Banking Authority and ECB will need to decide whether relying on a single U.S. vendor for model‑based security analysis is compatible with the EU’s broader push for digital sovereignty.
For European users and smaller banks, the risk is a two‑tier system: global giants with access to the most capable models and bespoke oversight regimes, and everyone else left with slower, more generic tools.
6. Looking ahead
Several fault lines will determine how this story unfolds.
First, Anthropic’s legal fight with the Trump administration matters more than it might seem. If the supply‑chain risk designation survives court challenges, federal agencies will be under pressure to limit their own reliance on Anthropic. That creates an awkward split if financial regulators – or systemically important banks under their watch – are simultaneously leaning into Mythos for critical security functions.
Second, regulators on both sides of the Atlantic will have to decide whether they treat models like Mythos as tools or as infrastructure. If they are just another third‑party tool, traditional vendor‑risk frameworks might suffice. If they are treated as infrastructure – akin to core payment systems or central clearing – we could see capital requirements, redundancy mandates, or even public‑sector alternatives.
Third, we should expect a wave of “AI for cyber” products branded explicitly around defense and resilience. Some will be genuinely helpful; some will mostly be marketing wrapped around a general‑purpose model with a security‑flavored interface. The hard question is how regulators and boards distinguish between them.
Watch for three specific signals over the next 12–24 months:
- Supervisory guidance from U.S. and European banking regulators on acceptable AI use in security and risk management.
- Incident reports where an AI model either prevents or, more worryingly, enables a major breach.
- Standard‑setting efforts – from bodies like ISO or the Financial Stability Board – that try to codify what “responsible AI in financial infrastructure” actually means.
For banks, the opportunity is real: done well, AI‑assisted security could move them from perpetual patching to proactive resilience. But the downside risk is systemic: a shared blind spot in a widely used model could turn into a synchronized failure across institutions.
7. The bottom line
Washington’s quiet push for Wall Street to test Anthropic’s Mythos is not just another AI pilot; it is an early rehearsal for how frontier models will be woven into the pipes of global finance. The move promises sharper defenses but concentrates risk in a single opaque system built by a company the U.S. government itself mistrusts. As Europe drafts its own rules and weighs domestic alternatives, the real question is whether regulators can harness these capabilities without creating a new, AI‑driven form of systemic risk.