Trump’s National AI Plan Isn’t Just Light-Touch. It’s a Power Grab.
The White House has finally put its AI cards on the table — and they reveal much more than a bias toward “innovation.” Trump’s new national AI framework would strip states of most of their power to regulate the technology, give developers an expansive liability shield, and push the burden of child safety from platforms onto parents. This isn’t a technical blueprint; it’s a political project that will shape how AI is built, governed, and contested for years. In this piece, we unpack who wins, who loses, and why Europe should care.
The news in brief
According to TechCrunch, the Trump administration has released a legislative framework for a single, national AI policy in the US. The proposal would preempt most state-level AI laws, arguing that AI development is inherently interstate and tied to national security and foreign policy. States would keep only narrow powers in areas like fraud, child protection, zoning and their own use of AI.
The framework calls for a “minimally burdensome” federal standard focused on accelerating AI adoption and removing “outdated” barriers to innovation. It emphasises liability protections for developers, insisting that states should not be able to punish AI makers for unlawful actions taken by third parties using their models.
On child safety, the document stresses parental responsibility rather than platform obligations, suggesting that Congress give parents more tools to manage children’s accounts and devices while only loosely encouraging companies to build protections against exploitation and self‑harm.
The framework also pushes for protections against government “censorship” of AI platforms, ties itself to earlier anti–“woke AI” efforts, and offers only vague language on copyright and training data.
Why this matters
At first glance, this looks like another skirmish in America’s endless federal vs. state tug‑of‑war. In reality, it’s a structural bet on who gets to shape the next computing platform.
The winners are obvious: large AI labs and platforms. A single, weak federal standard plus preemption of stricter state laws is regulatory heaven if you’re training frontier models, running consumer AI products at scale, or selling AI infrastructure to enterprises. Liability shields that insulate developers from downstream misuse further tilt the field in favour of those who can move fastest and externalise risk.
The losers are more distributed.
State lawmakers — especially in places like California and New York that have been experimenting with AI and platform accountability — would lose their role as early-warning systems. For the last decade, the most interesting tech regulation in the US (privacy, gig work, kids’ protections) has come from states, not Washington. This framework deliberately shuts that down.
Parents and children are also left exposed. The rhetoric of “parents know best” sounds empowering, but in practice it ignores the basic asymmetry of power and information. Parents don’t design recommender systems, content filters, or default settings; platforms do. Moving responsibility to households without setting hard, enforceable duties for companies guarantees uneven protection that tracks income, education, and time — not risk.
Finally, regulators themselves lose leverage. Once Washington defines AI primarily as an engine of growth and national security, any future attempt to tighten guardrails will be framed as anti‑innovation or even unpatriotic. The framing, not just the rules, is what’s being locked in.
The bigger picture
This framework doesn’t emerge in a vacuum. It crystallises three longer‑running trends in US tech policy.
First, it echoes the playbook used around Section 230 nearly three decades ago: grant broad immunity to technology firms early, on the assumption that innovation and economic growth justify accepting a lot of social risk. Back then, the platform era was just starting; today, we’re at a similar inflection point for AI. Washington appears ready to repeat the “move fast, regulate later” experiment — despite still struggling to clean up the last one.
Second, it continues a deregulatory shift visible in Trump’s earlier AI strategy and in the recent executive order instructing federal agencies to challenge “onerous” state AI rules. If that order was the opening salvo, this framework is the artillery barrage: a comprehensive attempt to re‑centralise power while keeping the bar for obligations as low as politically possible.
Third, it embeds AI squarely inside America’s culture wars. The explicit focus on preventing “censorship” of lawful political speech on AI platforms, and the linkage to the administration’s anti–“woke AI” agenda, signal that content governance in AI systems will be fought along partisan lines. That makes it harder to coordinate responses to misinformation, election interference, or public‑health disinformation without being accused of ideological bias.
The contrast with the EU’s AI Act, whose obligations are now phasing in, is stark. Brussels is rolling out a risk‑based regime with detailed obligations for high‑risk use cases, transparency requirements, and enforcement by independent regulators. Washington is proposing a growth‑first model with soft expectations, limited oversight, and a strong tilt toward developer freedom. That divergence will define how global AI products are built and where companies choose to launch or test risky features.
The European and regional angle
From a European perspective, this framework is both familiar and unsettling.
Familiar, because it reprises the longstanding division of labour: the US as the permissive sandbox that incubates global tech giants, the EU as the cautious regulator that tries to retrofit rights and safeguards after the fact. The GDPR vs. US data brokers; the Digital Services Act vs. Section 230; now the AI Act vs. America’s “minimally burdensome” standard.
Unsettling, because AI systems trained and deployed under laxer US rules will not stay within US borders. Foundation models developed in San Francisco or Seattle will power chatbots in Berlin, Ljubljana, Madrid, or Zagreb. Even if they are technically “adapted” for EU compliance, the underlying incentives — minimal liability, maximum data freedom, limited external oversight — are being set elsewhere.
European companies could face a competitive squeeze. Startups in Berlin or Barcelona will have to comply with the AI Act’s documentation, risk‑management and transparency demands from day one, while US rivals train on more data with fewer constraints and then selectively harden products for the EU market. That’s not hypothetical; it’s exactly what happened with privacy.
At the same time, Europe’s regulatory choices become more consequential. If the US formally shields AI developers from many forms of liability and sidelines state experimentation, the EU may effectively become the only major democratic bloc where civil society and regulators can seriously contest AI harms. That will increase pressure on European data‑protection authorities, digital regulators and courts — and turn transatlantic coordination on AI into a much harder, more political negotiation.
Looking ahead
The framework is still just that — a proposal. To become law, it would have to be translated into legislation by Congress, reconciled with existing sectoral rules, and then survive inevitable court challenges from states and civil‑rights groups.
Three fault lines are worth watching.
1. How broad will preemption be?
States will fight hard to preserve room to regulate AI in employment, housing, healthcare, and elections. Expect coalitions led by California, New York, and perhaps Colorado to argue that the Commerce Clause doesn’t justify wiping out all state experimentation. The eventual text of any bill — especially its definitions of “AI development” and “interstate” — will matter enormously.
2. What does the liability shield actually cover?
The principle that states should not “punish AI makers for unlawful actions taken by third parties” could evolve into a de facto Section 230 for AI. Exceptions for gross negligence, obligations for safety testing, and requirements for model‑use policies will determine whether the shield is calibrated or absolute.
3. How far does the “anti‑censorship” language go?
If written loosely, it could chill even good‑faith coordination between platforms and governments on disinformation, foreign interference or imminent threats. Courts will then be asked to draw lines between coercion and cooperation, political pressure and standard risk‑sharing.
On child safety, expect a noisy political battle but limited substance unless there is a major, headline‑grabbing scandal involving AI and minors. As long as the debate is framed as “parents vs. nanny state,” platforms can keep promising tools while resisting strict duties.
For European policymakers and companies, the key will be to track how far the US moves toward a developer‑centric model — and to prepare for a world where transatlantic AI governance is not converging but diverging.
The bottom line
Trump’s AI framework is less about solving concrete harms than about locking in a philosophy: AI as strategic infrastructure that should be governed centrally, lightly, and in ways that protect developers more than citizens. That may buy short‑term speed, but it also repeats mistakes from the social‑media era on a much more powerful technology stack. The real question for both Americans and Europeans is simple: who should carry the risk of AI — families and states, or the companies that profit from it?