Who Really Runs AI? Inside the New Lobbying Arms Race Over the Algorithms

February 27, 2026

Who really runs AI? Follow the money, not the models

The most consequential AI breakthroughs in 2026 may not be new models, but new laws. While engineers argue about context windows, political operatives are weaponising super PACs and state bills to decide who sets the rules for artificial intelligence. A recent episode of TechCrunch’s Equity podcast with New York assemblymember Alex Bores offers an unusually clear glimpse behind the curtain: AI safety law as political test case, nine‑figure lobbying machines spinning up, and even the Pentagon playing chicken with Anthropic. This isn’t an abstract policy debate anymore – it’s a power struggle over who actually governs the emerging infrastructure of the 21st century.

In this piece, we’ll unpack what’s happening, why the Bores story matters far beyond New York, how it fits into a larger global trend – and what it means for Europe.


The news in brief

According to TechCrunch’s Equity podcast, New York State assemblymember (and U.S. congressional candidate) Alex Bores has become an unexpected central figure in the U.S. AI regulation fight.

Bores sponsored New York’s first dedicated AI safety law, the RAISE Act, described on the show as a “first‑of‑its‑kind” framework and a potential template for other U.S. jurisdictions. After pushing the bill, Bores reportedly became a primary target of a Silicon Valley–aligned lobbying group, structured as a super PAC with around $125 million available for attack ads.

On the other side, TechCrunch reports the emergence of a pro‑regulation super PAC backed in part by Anthropic, which has committed $20 million. The episode also touches on tensions between the Pentagon and Anthropic over who ultimately controls how military AI systems are used, growing community resistance to data‑center construction, and Bores’ next steps: proposals on training‑data disclosure, content provenance and a 43‑point national AI policy framework.


Why this matters: AI’s rulebook is being privatised

Strip away the acronyms and this story is about one thing: who gets to write the AI rulebook – elected lawmakers, defence agencies, or a small circle of AI labs and their funders.

The immediate lesson from the Bores saga is that AI policy has crossed a threshold. If a state‑level lawmaker can attract a $125 million negative‑ad budget, AI regulation is now seen as systemically important by deep‑pocketed interests. That sort of money is not spent to win an abstract debate about ethics; it’s spent to lock in structural advantages.

Who benefits?

  • Incumbent frontier labs stand to gain from strong but narrow safety rules: the cost of compliance is high but manageable for them, and ruinous for smaller open‑source competitors.
  • Light‑touch‑regulation advocates benefit if they can paint any safety law as “anti‑innovation socialism”, scaring moderates away from acting at all.
  • Consultancies, auditors and AI compliance vendors will quietly cheer either outcome; more complexity means more billable hours.

Who loses?

  • Smaller startups and open‑source projects risk being crushed between expensive compliance and hostile lobbying that casts them as reckless.
  • Local communities lose leverage if AI rules are effectively written in PAC boardrooms before town councils can even understand the impact of another 200 MW data centre on their grid.

The Pentagon–Anthropic standoff that TechCrunch highlights is an even starker signal. When a private company can push back on the U.S. Department of Defense over operational control of AI systems, we’re watching the militarisation of corporate AI governance. That raises uncomfortable questions: if the same labs are shaping both battlefield AI and the laws that govern it, where are the democratic checks?

The Bores episode should be read less as a quirky New York story and more as a dress rehearsal: AI safety as the next big domain of regulatory capture – or of regulatory pushback, if lawmakers hold their nerve.


The bigger picture: From social media’s failure to AI’s second chance

The TechCrunch interview lands at a moment when governments are painfully aware they botched social media regulation. Platforms operated in a largely lawless space for a decade, externalising the costs of disinformation, mental‑health harm and political radicalisation. Now there’s a clear fear in policy circles: are we about to repeat that mistake with AI?

Three broader trends intersect here:

  1. Regulation as competitive strategy. Frontier labs increasingly use safety rhetoric to support regulations that mirror their own internal processes. This is classic “regulatory entrepreneurship”: shape the rules so that your current way of working becomes the legal gold standard.

  2. From voluntary principles to hard law. In the U.S., the White House AI executive order and voluntary safety commitments are still mostly soft power. The RAISE Act, as described on Equity, is part of the shift towards enforceable obligations – something the EU has already embraced with the AI Act.

  3. Infrastructure backlash. The podcast’s reference to communities blocking data centres is part of a global pattern. In Europe, Ireland and the Netherlands have seen moratoria and pushback over energy and water use. As models get larger and more power‑hungry, land‑use and grid politics become AI politics.

Compared with its competitors, Anthropic’s $20 million bet on a pro‑regulation PAC is especially revealing. It suggests the company believes a relatively strict regime is inevitable – and that its best move is to help design it. Others may follow. We’re likely to see the same split we saw in Big Tech around privacy: one camp arguing that strong rules cement trust and market power, the other warning that any constraint hands advantage to China.

Underneath all of this is a basic strategic choice: will AI end up regulated more like biotech and finance (licensing, audits, liability) or like social media (light rules, post‑hoc outrage, and years of damage control)? The TechCrunch conversation with Bores is one of the first concrete signs that at least some U.S. actors are trying to push it toward the former – while others invest heavily to keep it closer to the latter.


The European angle: Brussels has moved, Washington is improvising

For European readers, the Bores story is oddly familiar – but flipped.

The EU has already agreed on the AI Act, a horizontal regulation that classifies systems by risk, restricts certain use cases and introduces obligations around transparency, data governance and human oversight. In Brussels, the fight now is about implementation details, not whether to regulate at all.

The U.S., by contrast, is improvising from the bottom up: state laws like New York’s RAISE Act, voluntary federal frameworks, and now super PACs trying to steer the narrative. That fragmentation matters for European companies:

  • Compliance complexity. A European startup selling into the U.S. may soon face a patchwork of state rules, some inspired by Bores’ framework, others written by industry lobbyists to undercut it.
  • Regulatory arbitrage. If U.S. law stays weak while the EU AI Act bites, some firms will be tempted to keep sensitive experimentation stateside and ship only polished products into Europe.
  • Lobbying spill‑over. The kind of money TechCrunch describes in U.S. AI super PACs will not stay confined to America. Brussels has already seen intense Big Tech lobbying over the AI Act; expect a second wave focused on standards, enforcement and future revisions.

For Europe, the lesson is twofold. First, having a horizontal law early was a strategic win; it prevents the kind of vacuum in which super PACs can define the whole debate. Second, the job is far from done: if the EU under‑funds enforcement or allows too many carve‑outs, it could end up with the worst of both worlds – complex rules on paper, and de facto governance by the largest AI vendors in practice.


Looking ahead: What to watch in the next 24 months

Several trajectories now look likely.

  1. Explosion of AI‑focused political spending. The $125 million and $20 million figures cited by TechCrunch are almost certainly the floor, not the ceiling. Expect more AI‑branded super PACs, some openly industry‑funded, others backed by ideological groups framing AI as an existential threat or a culture‑war battleground.

  2. Template bills and copy‑paste regulation. If the RAISE Act is indeed seen as a blueprint, U.S. state legislatures will start copying and tweaking it. That’s good for harmonisation, but it also makes the initial design incredibly high‑stakes: any loophole will be replicated nationwide.

  3. Mil‑AI governance flashpoints. The Pentagon–Anthropic tug‑of‑war could be the first of many public clashes over military AI. Key questions to watch: Who has final veto power over deployment? Are there meaningful external audits? What happens when corporate safety policies conflict with classified national‑security priorities?

  4. Training‑data and provenance rules going global. Bores’ stated focus on training‑data disclosure and content provenance mirrors growing interest in watermarking and provenance standards such as C2PA. Over the next two years, expect these ideas to migrate into procurement rules, media regulations and cross‑border trade discussions.
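To make the provenance idea concrete: standards like C2PA work by cryptographically binding an assertion about a piece of content’s origin to a hash of the content itself, so any later edit breaks the binding. The sketch below is a deliberately simplified illustration of that core mechanism using only Python’s standard library – real C2PA manifests use public‑key signatures and certificate chains, not a shared HMAC key, and embed the manifest in the file itself. All names here (`SIGNING_KEY`, `make_manifest`, the claim fields) are illustrative, not part of any standard.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for this demo. Real provenance standards
# (e.g. C2PA) use public-key signatures and certificate chains instead.
SIGNING_KEY = b"demo-signing-key"

def make_manifest(content: bytes, claims: dict) -> dict:
    """Bind origin claims to a content hash (the core C2PA-style idea)."""
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps(
        {"content_sha256": digest, "claims": claims}, sort_keys=True
    ).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": signature}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature, then check the hash still matches the content."""
    payload = manifest["payload"].encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the manifest itself was tampered with
    recorded = json.loads(payload)["content_sha256"]
    return recorded == hashlib.sha256(content).hexdigest()

image = b"...model-generated image bytes..."
m = make_manifest(image, {"generator": "example-model", "ai_generated": True})
assert verify_manifest(image, m)                    # untouched content verifies
assert not verify_manifest(image + b"edit", m)      # any edit breaks the binding
```

The policy-relevant point the sketch shows is that provenance is a property of the content-plus-manifest pair: disclosure rules can mandate that generators attach such manifests, and downstream platforms can verify them without trusting the uploader.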

For companies and developers, the opportunity is to engage early rather than treat regulation as a nuisance to be handled by legal teams. Technical voices are badly needed in these debates; otherwise, the vacuum will be filled by lobbyists and campaign consultants who see AI primarily as a fundraising hook.

The biggest open question: can democracies design AI rules that are both substantive (real safety, real accountability) and pluralistic (not simply captured by one camp of labs or activists)? The answer will shape not only who gets rich from AI, but who bears the risks.


The bottom line

The TechCrunch Equity episode with Alex Bores is an early warning: AI governance is no longer a sleepy committee topic but a high‑budget political battlefield. State‑level laws like New York’s RAISE Act, super PACs with nine‑figure war chests and even Pentagon–lab standoffs are all symptoms of the same shift – the privatisation of AI rule‑writing. Europe has a head start with the AI Act, but it is not immune to the same pressures. The real question for readers is simple: are you comfortable letting a handful of companies and campaign strategists decide how intelligent machines are allowed to shape your society?
