AI Money Has Chosen Sides: What Anthropic’s Political Bet Really Means

February 20, 2026
[Illustration: the US Capitol with AI circuitry overlaid and opposing political campaign posters]


For the first time, frontier AI labs are openly spending millions against each other in a US election. That should make anyone who cares about how AI will be governed sit up.

According to TechCrunch’s report, Anthropic is now bankrolling a political committee that’s effectively going to war with a rival, industry-backed AI super PAC over a single congressional race in New York. Behind the ads and acronyms is a much bigger story: the fight over who writes the rules for powerful AI systems — and whether those rules prioritise safety, growth, or corporate control.

This isn’t just American political drama. It’s an early sketch of how AI power will try to shape democracy everywhere.

The news in brief

As reported by TechCrunch, New York State Assembly member Alex Bores, now running for Congress in New York’s 12th district, has become a test case for AI money in politics.

A super PAC called Leading the Future – described as pro‑AI and reportedly funded with over $100 million from backers including Andreessen Horowitz, OpenAI president Greg Brockman, AI search startup Perplexity and Palantir co‑founder Joe Lonsdale – has spent about $1.1 million on ads opposing Bores. The group is attacking him largely because he sponsored New York’s RAISE Act, which obliges major AI developers to disclose safety practices and report serious misuse of their systems.

Now, a rival committee has entered the race. Public First Action, backed by a $20 million donation from Anthropic (according to Bloomberg, cited by TechCrunch), is spending $450,000 to support Bores. Public First Action is also pro‑AI, but argues for transparency, safety standards and public oversight as central pillars of its vision.

Why this matters

This race is not really about one congressional seat in New York. It’s about who gets to define “pro‑AI”.

Leading the Future embodies the Silicon Valley maximalist view: AI is an engine of economic growth and geopolitical power, and regulation should be minimal and industry‑friendly. Bores became a target not because he is anti‑technology, but because he backed the RAISE Act – modest, state‑level guardrails around safety and reporting. From the PAC’s perspective, that’s a dangerous precedent: if one legislator can make AI firms open their black boxes, others might follow.

Anthropic’s backing of Public First Action signals a competing narrative: that being pro‑AI can (and should) mean strong safety regimes, disclosure and public accountability. Anthropic has long marketed itself as the lab that takes existential risk seriously. Putting $20 million behind a PAC willing to defend a regulator‑minded candidate raises the stakes: Anthropic is no longer content to lobby quietly in Washington; it is ready to pick winners and losers at the ballot box.

Who benefits?

  • Frontier labs with a safety brand gain political leverage and moral positioning.
  • Growth‑at‑all‑costs investors get a powerful vehicle to punish would‑be regulators.

Who loses?

  • Smaller AI startups, which can’t afford PACs, risk being squeezed between two heavily funded visions they don’t control.
  • Public institutions, which now have to navigate regulation under the shadow of duelling, ultra‑well‑funded interests.

The immediate implication: any lawmaker considering serious AI oversight is now watching this race very closely. The combined message from the two sides is clear: regulate us, and millions will be spent either to destroy you or to save you.

The bigger picture

We’ve been here before, just not with AI.

When Uber and Lyft fought local transport rules, when telecoms giants battled net neutrality rules, when crypto firms poured tens of millions into US races through PACs like Fairshake, the pattern was the same: new tech industries try to lock in favourable rules before society catches up.

What’s different now is the speed and concentration of power. A handful of AI labs and investors already control models that can influence information flows, labour markets and national security. They are now also learning to control political narratives, one congressional primary at a time.

This showdown slots neatly into several broader trends:

  • Regulation is finally coming. The US has issued an AI executive order; the UK hosted an AI Safety Summit; the EU has agreed the AI Act. The era of AI as a regulatory wild west is ending, and industry players know it.
  • Capital is polarising around competing ideologies. Anthropic leans into safety, red‑teaming and long‑term risk; many of the Leading the Future backers are outspoken techno‑optimists who see fears about AI as overblown or even harmful.
  • Lobbying is shifting from backroom to battleground. Instead of only shaping draft texts in Brussels or Washington, AI companies are now trying to pre‑empt unfriendly politicians ever reaching office.

Compared with Europe, the US system is uniquely fertile ground for this experiment. Super PACs can raise and spend unlimited sums as long as they do not coordinate directly with campaigns. For AI labs, this offers a relatively low‑friction way to project power.

This contest between Anthropic‑aligned and a16z/OpenAI‑aligned money is an early sign that there will be no single “AI lobby”. Instead, we’re heading for a messy ecosystem of overlapping, competing blocs – safety‑first, growth‑first, open‑source, national‑security and more – each trying to stamp its worldview onto law.

The European / regional angle

From a European vantage point, this story is a warning and an opportunity.

The warning: Europe cannot outsource AI governance to US electoral politics. The labs driving this fight – Anthropic, OpenAI‑adjacent figures, Palantir alumni – build systems used globally, including by European companies and governments. If US lawmakers become afraid to pass serious AI rules for fear of being drowned in attack ads, the default global standard will drift towards the lowest common denominator.

Yet the EU has chosen a different path. The EU AI Act, the Digital Services Act and the Digital Markets Act collectively signal that Europe is willing to impose hard obligations on powerful platforms and models: transparency requirements, risk assessments, obligations for “systemic” players. Corporate political spending is also far more constrained in most EU member states than in the US, limiting the kind of super‑PAC arms race we see around Alex Bores.

But Europe is not immune. Instead of super PACs, influence here tends to flow through industry alliances, think‑tanks and Brussels‑based lobbying operations. As American AI money adopts more aggressive tactics, expect more of that playbook to be quietly imported into EU politics.

The opportunity: this visible split between “growth at any cost” and “safety with oversight” gives European regulators leverage. They can point to Anthropic’s pro‑oversight stance as evidence that serious AI players can live with strong rules – and that the loudest voices from Silicon Valley do not speak for the whole sector.

For European startups and research labs, the key is not to imitate US‑style PAC warfare, but to ensure they have a coherent, independent voice in Brussels debates that is not simply an echo of whichever US bloc shouts loudest.

Looking ahead

This New York race is unlikely to be the last AI‑infused contest of 2026 – or the most important. Think of it as a public beta test.

If Leading the Future succeeds in knocking out Bores, other legislators will read it as a clear deterrent against proposing robust AI rules. If Anthropic’s side prevails, we may see more candidates explicitly running on a “pro‑innovation but pro‑safety” platform, backed by labs eager to prove that regulation and growth can coexist.

Several things are worth watching over the next 12–24 months:

  • Scale: does AI money remain focused on a handful of primaries, or does it expand to dozens of races in 2028?
  • Transparency: will US regulators require – or the labs themselves voluntarily provide – clearer reporting on where AI‑linked political money is going?
  • Backlash: at what point do voters, or rival politicians, start to campaign against “AI‑bought” seats in Congress?
  • Transatlantic spillover: do we see analogous efforts in the UK, where campaign finance rules are looser than on the continent, or more aggressive lobbying pushes in Brussels once the AI Act implementation phase begins?

The biggest risk is regulatory capture: that the same small pool of companies building frontier models ends up effectively writing the rules that govern them, via a mix of lobbying, research funding and now direct electoral influence. The counter‑risk is over‑correction – a populist backlash that treats any AI involvement in politics as inherently illegitimate, freezing useful innovation.

The bottom line

Anthropic’s decision to bankroll a PAC that backs a pro‑regulation candidate against an industry‑funded AI super PAC marks a new phase: AI firms are no longer just subjects of regulation but active political power centres.

Competing pro‑AI visions may be healthier than a single united front, but we should be honest about what is happening: the future of AI governance is being auctioned, one campaign at a time. The real question for voters and regulators – in the US, in Europe and beyond – is whether we are comfortable letting the rules for transformative technologies be set in the shadow of these chequebooks, or whether more democratic, transparent mechanisms can still claw back control.
