The OpenAI mafia: How one lab’s alumni quietly took over the AI startup map

February 20, 2026
5 min read
[Illustration: interconnected AI startup founders linked in a network]

The term “PayPal mafia” is getting a sequel, and this time it’s built on GPUs instead of payment rails. A dense web of companies founded by former OpenAI staff now stretches across almost every corner of the AI economy: foundation models, search, robotics, climate tech, even superconductors. This isn’t just a quirky bit of Silicon Valley trivia. It’s becoming one of the most important power structures in global AI. In this piece we’ll look at what TechCrunch’s new list of OpenAI alumni startups actually tells us about money, influence and risk in today’s AI industry – and what it should signal to regulators and European founders.


The news in brief

According to TechCrunch, at least 18 notable startups have been founded by former OpenAI employees over the past few years. The list ranges from heavyweight rivals like Anthropic and Elon Musk’s xAI to more specialised players such as Perplexity (AI search), Adept and Applied Compute (AI agents for work), Covariant and Prosper Robotics (robotics), and DeepTech bets like Periodic Labs (materials discovery) and Living Carbon (engineered plants).

Several of these companies sit in the industry's top valuation tier. TechCrunch reports Anthropic at around $380 billion, Safe Superintelligence at $32 billion pre-product, Perplexity at $20 billion and Thinking Machines Lab at $12 billion. Others have raised enormous early rounds, including Periodic Labs' $300 million seed and Adept's $350 million.

Geographically, the companies cluster in the Bay Area, with a few nodes in London and Tel Aviv. Many of the founders still invest in or collaborate with OpenAI, creating a dense alumni network spanning founders, VCs and Big Tech acquirers.


Why this matters

The OpenAI “mafia” is more than a feel‑good story about entrepreneurial alumni. It’s a signal that frontier AI is consolidating around a surprisingly small, socially tight community.

First, look at where capital and talent are going. In an era when most startups struggle to raise a seed round, pre‑product spin‑outs like Safe Superintelligence or Periodic Labs can attract hundreds of millions or even billions largely on the strength of the founders’ OpenAI pedigree. That’s not just faith in their technical skills; it’s investors betting that proximity to the OpenAI power graph – shared backers, former colleagues, compatible technical stacks – will amplify their odds of dominating new AI niches.

The winners here are obvious: alumni founders, the investors who know how to work this network, and Big Tech platforms like Amazon that can quietly acqui‑hire or partner with these teams without the optics of buying OpenAI itself. Amazon bringing in Adept's founders and Covariant's leadership fits a pattern: don't buy the whole lab, just siphon off specialised capabilities.

The losers are more subtle. Independent AI startups without this pedigree face an even steeper uphill battle for capital, GPUs and press attention. And the public-interest questions around AI risk being framed by a relatively homogeneous group of people who share not just similar technical assumptions, but also a common workplace culture and incentive structure forged at OpenAI.

Instead of genuine competition, we may be watching the formation of an AI cartel in everything but name: ostensibly separate companies, deeply interlinked socially and financially, racing along the same capability and monetisation tracks.


The bigger picture

Silicon Valley has seen this movie before. The PayPal mafia seeded Tesla, SpaceX, LinkedIn, Yelp and more. Google and Facebook alumni have quietly powered a generation of SaaS and consumer apps. But the OpenAI network is different in three crucial ways.

First, the capital intensity is off the charts. Building a competitive frontier model or AI-intensive research platform may require billions in compute and infrastructure. That skews the field toward founders who can unlock mega‑rounds on reputation alone – exactly what an OpenAI stamp enables. It also encourages early alignment with hyperscalers, because few others can provide that scale of GPU access.

Second, the stakes are qualitatively higher. When PayPal alumni built new companies, they changed markets. OpenAI alumni are helping shape cognitive infrastructure: systems that will mediate how people search, learn, work and even design new materials or biological systems. Safety culture, model governance and data practices spread through this alumni network will have outsized impact far beyond their own balance sheets.

Third, the network blurs competitive lines. TechCrunch’s list already includes cases where founders leave, spin up a rival (Anthropic, xAI), then see staff or co‑founders boomerang back to OpenAI or other incumbents. That kind of porous boundary makes it harder to treat these organisations as truly independent in strategic or regulatory terms.

This trend also intersects with other industry shifts:

  • Big Tech’s turn to acqui‑hiring AI teams rather than acquiring entire companies, partly to dodge antitrust heat.
  • A growing class of “AI natives” who move fluidly between labs, cloud providers and startups, creating an informal standard‑setting community that exists outside formal regulation.
  • The emergence of specialised AI verticals – agents, scientific discovery, climate, robotics – where alumni instantly define the benchmark to beat.

If the 2010s were about startup ecosystems, the 2020s are increasingly about talent ecosystems – and OpenAI’s is becoming the central one.


The European / regional angle

From a European perspective, the OpenAI mafia is both a warning and a blueprint.

The warning: once a single research lab accumulates enough prestige, compute and distribution, its alumni can effectively colonise an entire global market. Most of the companies in TechCrunch’s list are U.S.-based, backed by U.S. or Gulf capital, and architected around U.S. cloud providers. Europe risks becoming a downstream customer of a tightly knit San Francisco–centric AI elite.

The blueprint: Europe already has proto‑mafias of its own – DeepMind alumni across the U.K. and Berlin, ex‑DeepL engineers in German NLP startups, researchers spinning out of ETH Zurich, EPFL, INRIA or TUM. But these networks have not yet produced the same density of mega‑funded frontier labs.

Here, regulation cuts both ways. The EU AI Act, GDPR and the Digital Markets Act constrain some of the most aggressive data and deployment strategies, making it harder to move fast and break things. At the same time, they create a competitive angle for European founders to build “compliance‑native” AI companies that global enterprises can actually deploy without legal panic.

The real risk is brain drain. If the most ambitious European researchers still feel they must do a tour at OpenAI, Anthropic or xAI before being fundable at scale, the centre of gravity remains in Silicon Valley. Without matching pools of public and private capital – or bold procurement from EU institutions – Europe will mostly be investing as LPs in someone else’s AI future.


Looking ahead

Three trajectories are worth watching.

1. The next wave of spin‑outs. If OpenAI and Anthropic both move toward IPOs, early employees will suddenly have the liquidity to take bigger risks. Expect another crop of alumni‑led startups around 2027–2028, especially in highly specialised areas like AI‑native chips, safety tooling, and autonomous lab infrastructure.

2. Regulatory response. As this network’s influence grows, antitrust and AI‑safety regulators will eventually have to treat the OpenAI alumni ecosystem as a connected system, not a bunch of isolated firms. Questions like “Are these companies competing or coordinating?” or “Do shared investors and board members soften rivalry?” will move from think‑tank panels into formal inquiries.

3. Standard‑setting power. Many of these startups are quietly defining norms – from how AI search handles attribution (Perplexity) to how labs think about “safe superintelligence” (SSI) or agents in the workplace (Adept, Applied Compute, Worktrace). If the same social circle sets both the technical benchmarks and the ethics vocabulary, outsiders – including European regulators – will find themselves reacting rather than steering.

For European founders, this is also an opportunity: position yourselves as the credible counterweight. Build companies that interoperate with, but are not dependent on, this ecosystem: open‑source models, sovereign cloud offerings, strong on‑device capabilities, and tools explicitly designed to meet EU legal standards.


The bottom line

The rise of an OpenAI mafia shows that in frontier AI, power no longer resides in single companies but in alumni networks that radiate capital, norms and influence. That can supercharge innovation, but it also risks locking the world into one narrow vision of how AI should work – and who should profit from it. The real question, especially for Europe, is whether we are content to live in that orbit, or ready to build competing constellations of our own.
