AI Money vs. Democracy: What the Alex Bores Fight Reveals About the Next Phase of Tech Power

March 3, 2026
[Illustration: AI icons over the US Capitol and political campaign ads]

1. Introduction

AI regulation is no longer being written in white papers and think-tank panels; it’s being fought with attack ads and nine-figure war chests. The campaign to block New York assembly member Alex Bores’ run for Congress is not just a local U.S. story — it’s an early test of how far AI giants are willing to go to shape the rules that will govern them. In this piece, we’ll unpack what’s happening around Bores, why AI companies are suddenly obsessed with state politics, and what this power struggle means for the future of AI governance, including for Europeans who think this is “just an American problem.”

2. The News in Brief

According to TechCrunch, New York state assembly member Alex Bores — a former Palantir employee now running for Congress in New York’s 12th district — has become a primary target for a new Silicon Valley–backed super PAC called Leading the Future.

The PAC has raised around $125 million and is focusing on U.S. state-level races, especially candidates who support AI regulation. TechCrunch reports that the group plans to spend at least $10 million opposing Bores. Financial backers include Palantir co-founder Joe Lonsdale, OpenAI president Greg Brockman, Andreessen Horowitz, AI search startup Perplexity, and other prominent tech investors and founders.

Bores sponsored New York’s RAISE Act, an AI transparency law signed in December, which requires large AI labs (with more than $500 million in revenue) to publish and follow safety plans and to report catastrophic incidents. TechCrunch notes that Bores also backs bills forcing disclosure of training data sources and metadata for synthetic content.

On the other side, an Anthropic-backed PAC, Public First Action, is spending about $450,000 in support of Bores. Meta has separately committed $65 million to other tech-friendly PACs focused on state races, and AI-aligned donors gave at least $83 million to U.S. federal campaigns in 2025.

3. Why This Matters

The Bores case is a live demonstration of how AI power is moving from labs and app stores into legislatures. The sums involved are extraordinary: New York assembly races typically see total fundraising in the low six figures, as TechCrunch notes. Dropping eight figures into a single congressional opponent isn’t participation — it’s domination.

Who benefits? In the short term, large AI labs and the venture funds behind them. With a $125 million war chest, a PAC like Leading the Future can send a clear message to any state legislator considering a serious AI bill: regulate us, and we will make you a cautionary tale. That’s a textbook chilling effect.

Who loses? First, independent policymakers who actually understand the technology — which Bores arguably does as a former engineer and founder. If the only “acceptable” lawmakers are those who defer to industry, we drift toward regulatory capture long before a federal AI framework is even written.

Second, workers and citizens who want both innovation and guardrails. The RAISE Act is hardly radical: it asks big labs to have transparent safety plans and to report major failures. Many other industries would consider this a dream scenario compared with hard ex-ante controls.

The immediate implication is that AI regulation is being front-loaded with political hardball. Instead of negotiating over specific rules, some of the biggest players are trying to decide who even gets a seat at the table. That’s more foundational — and more dangerous — than arguments over model thresholds or reporting formats.

4. The Bigger Picture

What’s happening around Bores fits a broader pattern that we’ve seen before — and that Europe should recognize.

First, this is reminiscent of the gig-economy ballot wars in the U.S. In the late 2010s and early 2020s, ride-hailing and delivery platforms poured hundreds of millions of dollars into state initiatives to avoid classifying drivers as employees. The goal wasn’t just to win a vote; it was to lock in a precedent before stricter rules spread. Leading the Future is playing a similar preemption game for AI.

Second, it reflects a split inside the AI industry itself. On one side: investors and founders who see almost any constraint as a threat to “progress.” On the other: actors like Anthropic and allied groups that are still aggressively pro-AI but willing to accept transparency, documentation and incident reporting. The fact that Bores is being funded and attacked by different “pro-AI” factions tells us that the battle is not AI vs. anti-AI, but which governance model for AI wins.

Third, the federal backdrop matters. TechCrunch notes that President Trump signed an executive order instructing federal agencies to go after “onerous” state AI laws, like New York’s. Combined with the PAC spending, you get a coordinated push: weaken states, centralize power in Washington, and then lobby heavily to make sure any federal framework is as light-touch as possible.

Compared to Europe’s AI Act, which is rooted in product safety and fundamental rights, this is almost the mirror image: instead of arguing over how strict the baseline should be, U.S. AI giants are trying to decide who gets to write the baseline.

5. The European / Regional Angle

For European readers, it is tempting to see this as a uniquely American pathology, enabled by U.S.-style super PACs and virtually unlimited corporate political spending. But there are at least three reasons not to dismiss it.

First, European rules do not exist in a vacuum. If U.S. federal law ends up much weaker than state initiatives like the RAISE Act, American firms will still operate globally under their home regime — and will lobby in Brussels, Berlin, Ljubljana or Madrid to dilute EU enforcement in practice. Europe has stricter rules on direct campaign financing, but lobbying, think-tank funding and “issue campaigns” can play a similar role.

Second, the Bores fight shows what happens when technical literacy enters politics. He’s not a generic privacy advocate; he’s a former Palantir employee who worked on real systems and then walked away over ethics. Europe also has a shortage of lawmakers with deep technical backgrounds. When they appear — whether in the European Parliament or national assemblies — they may face similar pushback from incumbents uncomfortable with being challenged on the details.

Third, this isn’t just about the U.S. market. AI companies backing these PACs either operate heavily in Europe today or soon will. The way they behave at home is a signal of how they see regulation everywhere else. If the default stance is “spend whatever it takes to crush the first serious regulator,” European institutions should assume a confrontational, not cooperative, posture from at least part of the industry.

For European startups positioning themselves as “trustworthy AI” or “regulation-ready,” this conflict is also a strategic opening: if big U.S. labs become synonymous with political arm-twisting, there is room for challengers who treat compliance and accountability as a feature, not a threat.

6. Looking Ahead

Three things are likely over the next 12–24 months.

First, AI will become a defined political cleavage, not a niche tech topic. In the U.S., state races will increasingly feature “for or against Sacramento/Albany AI rules” narratives, in the same way that ride-hailing and data privacy once did. Expect similar polarization in Europe as the AI Act moves from text to enforcement and member states implement national supervisory structures.

Second, money will keep escalating until there is a backlash. The TechCrunch numbers — $125 million for a single PAC, $65 million from Meta for other state-focused PACs, $83 million to federal races in one year — are early-stage figures. If AI continues to be framed as the next general-purpose technology, those numbers will look like seed funding. The risk for industry is that, at some point, voters and regulators conclude that AI has simply become the new oil and treat it accordingly: stricter antitrust, tighter public procurement rules and more aggressive audits.

Third, candidates like Bores are a preview of a coming generation of technically fluent politicians. Some will align closely with industry, others will be critics, but either way the days when CEOs could simply overwhelm nontechnical legislators with jargon are ending. That makes early efforts to “make an example” of one of them particularly significant: if Bores loses badly under a wave of hostile ads, it may discourage others with similar profiles from entering politics.

For readers, the key things to watch are: whether the RAISE Act survives federal challenges; how many other states copy it or back away; and whether Europe’s AI Act ends up looking moderate or strict compared with whatever the U.S. finally adopts.

7. The Bottom Line

The campaign against Alex Bores is not just a fight over one congressional seat; it is an opening skirmish in a much larger war over who gets to shape the AI era. When AI firms spend tens of millions to punish a lawmaker whose flagship bill mostly demands transparency and safety plans, they signal that even modest guardrails are seen as existential. If this is how the industry behaves in 2026, how should societies respond in 2030, when AI systems are even more embedded in work, welfare and warfare?
