From AI Czar to Billionaire Council: What David Sacks’ Move Really Signals for AI Power
1. Headline & intro
David Sacks is no longer Donald Trump’s “AI and crypto czar,” but he hasn’t left the stage – he’s just moved to a different part of it. His shift to co-chairing the President’s Council of Advisors on Science and Technology (PCAST) looks, on paper, like a downgrade in power. In practice, it may prove more consequential for how AI, chips, and nuclear tech are governed in the U.S. – and, indirectly, for the rest of the world.
This is a story about more than one investor’s job title: it’s about how a handful of tech billionaires are positioning themselves to write the rulebook for the next decade of AI.
2. The news in brief
According to reporting by TechCrunch, David Sacks has completed a 130‑day stint as the Trump administration’s special government employee focused on AI and crypto policy. He confirmed to Bloomberg that he is transitioning to co-chair PCAST alongside White House tech adviser Michael Kratsios.
Unlike the “AI czar” post, which gave him direct access to President Trump and a hand in shaping concrete policy, PCAST is a federal advisory body: it studies issues, writes reports, and sends recommendations to the administration, but it does not itself make binding rules.
This new PCAST is unusually dominated by tech industry leaders. As reported by TechCrunch, the first 15 members include Nvidia’s Jensen Huang, Meta’s Mark Zuckerberg, Oracle’s Larry Ellison, Google co‑founder Sergey Brin, AMD’s Lisa Su, and Michael Dell, among others. Sacks says the council will focus on AI, advanced semiconductors, quantum computing and nuclear power, and will help push Trump’s recently announced national AI framework aimed at replacing conflicting state-level rules.
3. Why this matters
The headline change – from “czar” to “advisor” – sounds like a loss of power. But the composition of PCAST suggests something more subtle: a consolidation of AI governance inside a tight circle of incumbent tech giants and aligned investors.
Who benefits?
- Big U.S. platforms and chipmakers gain a direct, coordinated channel to shape the narrative about “responsible” AI, semiconductor strategy and national competitiveness. Having their CEOs on PCAST means policy debates will start from the assumptions and incentives of companies that already dominate the market.
- Sacks personally gets something like a “soft landing”: less day‑to‑day political heat, less pressure to negotiate messy compromises, but a prestigious platform and continued influence over the intellectual framing of AI policy.
Who loses?
- Smaller AI startups, open‑source communities, labor groups and civil society organizations are notably absent from the table. Their concerns about market concentration, labor displacement or open infrastructure are now one layer further removed from the White House.
- U.S. states – especially California and New York, which have been experimenting with their own AI rules – are clearly being targeted. Sacks explicitly framed state-level laws as a problematic “patchwork.” The national AI framework he champions is likely to seek preemption, blunting more ambitious state regulation.
This move also matters because of Sacks’ unresolved conflicts of interest. As TechCrunch previously reported, he obtained ethics waivers allowing him to keep financial stakes in AI and crypto ventures while influencing federal policy on both. Those same interests now sit behind his role in helping to steer the government’s long-term AI and tech strategy.
4. The bigger picture
Sacks’ reappearance as PCAST co-chair fits a pattern we’ve seen repeatedly in U.S. tech governance: when a technology becomes geopolitically critical, Washington outsources much of the thinking to the people building and monetising it.
We saw an earlier version of this with Eric Schmidt’s leadership of the National Security Commission on Artificial Intelligence (NSCAI) during the late 2010s and early 2020s, which heavily framed AI as a U.S.–China arms race and catalysed large public‑private programs. We see it again in voluntary AI safety commitments negotiated with a small club of foundation model companies, rather than codified rules binding the whole ecosystem.
PCAST itself has a mixed legacy. TechCrunch notes that Obama’s council issued dozens of reports, but only a small fraction turned into concrete policy. Trump’s first‑term council barely registered. What’s different now is the degree of corporate concentration: this is not a balanced mix of academia, civil society and industry; it’s a roster of the very firms that stand to win or lose the most from AI regulation.
At the same time, the global environment has shifted. The EU has passed a binding AI Act. The UK is trying to position itself as a “regulation‑light” AI hub. China is rolling out sector‑specific AI rules focused on social stability and censorship. In that context, Trump’s national AI framework – pushed by a council of CEOs – looks like an explicit attempt to define a third way: aggressively pro‑industry, nationally centralised, and sceptical of both state‑level initiatives and Brussels‑style rulebooks.
5. The European / regional angle
For Europe, this development is both a warning and an opportunity.
On one hand, a U.S. AI strategy written with heavy input from Nvidia, Meta, Google and Oracle risks deepening regulatory divergence. The EU AI Act is built around risk categories, mandatory impact assessments, and clear enforcement powers. A U.S. framework crafted by incumbents is likely to stress innovation, voluntary standards and liability shields. That will complicate life for transatlantic companies that have to comply with both.
On the other hand, the optics of a billionaire‑dominated PCAST may actually strengthen Europe’s political case. Brussels has long argued – through GDPR, the Digital Services Act and the Digital Markets Act – that self‑regulation by large platforms does not work. PCAST’s membership could be held up in EU debates as a textbook example of “regulatory capture” to avoid.
For European AI startups, the risk is that U.S. rules become de facto global technical standards, especially around safety benchmarks, model evaluation and chip export policy – all areas PCAST is poised to touch. If those standards are defined in ways that favour hyperscalers, European challengers could be boxed into less profitable niches.
Finally, there is a cultural dimension: European publics are generally more sceptical of mixing high office with large personal stakes in regulated sectors. Sacks’ ethics waivers would be far harder to imagine under many European national rules. That contrast will sharpen transatlantic debates about how to build legitimate AI governance.
6. Looking ahead
Expect three main battles in the coming 12–24 months.
1. Federal vs. state power in U.S. AI regulation.
Trump’s national AI framework, with Sacks cheering it on from PCAST, is likely to seek strong federal preemption. California, Colorado, New York and others will not back down easily from their own AI and data bills. The outcome will determine whether AI regulation in the U.S. looks more like financial regulation (federalised) or consumer protection (often state‑driven).
2. The scope of PCAST’s influence.
Historically, councils like PCAST can either produce shelfware or become the intellectual backbone for real policy. Watch for early reports on AI safety, compute infrastructure and export controls. If those documents start being cited in executive orders or agency rulemakings, you’ll know this PCAST is more than a talking shop.
3. Sacks’ dual role as investor and agenda‑setter.
Even without a formal government job, Sacks remains a high‑leverage node: venture capitalist, podcast host, political donor and now co‑chair of a marquee advisory body. The key question is whether he recuses himself from areas where Craft Ventures is heavily exposed – or whether those conflicts simply get normalised.
Globally, expect more countries to copy elements of the U.S. approach if it is seen as growth‑friendly, and to copy Europe’s if early scandals highlight the downsides of CEO‑centred governance. Either way, the window for designing AI rules from a blank slate is closing fast.
7. The bottom line
David Sacks’ shift from AI czar to PCAST co‑chair is not a retreat; it’s a repositioning. Day‑to‑day policymaking may move out of reach, but the strategic framing of U.S. AI, chips and nuclear policy is being placed squarely in the hands of a small, invested elite. If you care about who gets to define “responsible AI” for the rest of us, the real question isn’t whether Sacks is in the Oval Office – it’s whether anyone at the table doesn’t own a data centre.