In the AI arms race, VCs are quietly throwing out their own rulebook

February 24, 2026
5 min read


Investor loyalty was always more marketing slogan than binding oath, but in frontier AI it is evaporating in real time. Some of Silicon Valley’s most prestigious funds are now backing both OpenAI and Anthropic — two companies locked in a battle for the core infrastructure of the next decade. That shift doesn’t just matter to billion‑dollar labs in San Francisco. It reshapes how early‑stage founders everywhere, including in Europe, should think about who they let onto their cap table, what information they share, and how much they can rely on their investors when a real platform war breaks out.


The news in brief

According to TechCrunch, OpenAI is close to finalizing a roughly $100 billion funding round, while rival Anthropic has just closed a massive $30 billion raise. What stands out is the overlap: at least a dozen investors with direct stakes in OpenAI also appeared in Anthropic’s latest round. Names cited include well‑known venture firms such as Founders Fund, Iconiq, Insight Partners and Sequoia Capital, alongside hedge funds and asset managers like D1, Fidelity and TPG.

TechCrunch notes one especially striking example: affiliated funds of BlackRock joined Anthropic’s raise even though a senior BlackRock executive sits on OpenAI’s board. The article also recalls prior reporting that Sam Altman had informally signalled to OpenAI investors that backing certain rivals — including Anthropic, xAI and Safe Superintelligence — could affect the confidential information they receive, a position he later softened.

Not every venture firm is playing both sides. TechCrunch points out that Andreessen Horowitz is in OpenAI but not Anthropic, while Menlo Ventures is in Anthropic but not OpenAI, with several others appearing to have picked just one horse for now.


Why this matters

Venture capital has long sold itself on the idea of loyalty: a partner who joins your board, guides your strategy, helps you recruit and, crucially, goes to war with you against direct competitors. The behaviour emerging around frontier AI labs undermines that story.

When top‑tier firms own stakes in both OpenAI and Anthropic, they stop looking like partners and start looking like index funds on the AI ecosystem. That is rational for their LPs — if you believe both companies could be trillion‑dollar outcomes, why pick just one? — but it dilutes the traditional promise to founders.

The immediate risks are practical, not theoretical. These are private companies that share extremely sensitive information with investors: model roadmaps, safety incidents, proprietary benchmarks, pricing strategies, GPU allocation plans. Even if firms insist on internal “Chinese walls,” the probability of leakage — intentional or accidental — rises sharply when you sit on both sides of the table.

It also changes incentives in subtle ways. An investor who is economically exposed to both OpenAI and Anthropic has a strong motive to avoid actions that could decisively harm either. That can mean less aggressive competition, less willingness to support hard strategic choices (e.g. exclusive partnerships), and more pressure towards industry consensus. Great for systemic stability; not always great for the specific startup that thought it was getting a champion, not a referee.

The clear winners are the mega‑funds that can now justify enormous AI exposure without appearing reckless — they simply own “the sector.” The losers are early‑stage companies and smaller VCs who still play by the old rules and cannot easily diversify. For them, the trust premium and the promise of alignment become the only real differentiators.


The bigger picture

This isn’t happening in a vacuum. Over the last two years, AI has become the most capital‑intensive segment of tech. Model training runs cost hundreds of millions of dollars; hyperscalers are committing tens of billions in data centres and specialized chips. Microsoft’s multi‑layered deals with OpenAI, and Google and Amazon’s hefty bets on Anthropic, signalled that AI infrastructure would be financed more like energy projects than classic software startups.

Once you cross that capital threshold, traditional venture norms come under pressure. In the dot‑com era or even the ride‑sharing wars (Uber vs. Lyft, Didi vs. everyone), VCs typically chose sides. You might see late‑stage cross‑ownership through public markets, but it was rare to see the same early‑stage firms lead rounds in direct, still‑private rivals. In AI, we are watching that taboo erode in early and growth stages simultaneously.

There is also a structural shift in how top firms see themselves. Many have quietly moved from being concentrated, hands‑on company builders to being diversified managers of "exposure" to themes: AI, climate, fintech, defence. The logic is closer to a sophisticated hedge fund than to the 1980s Sand Hill Road partnership model. Backing both OpenAI and Anthropic is simply the most visible expression of this financialization of venture capital.

Compare that with what cloud providers are doing. Microsoft, Google, Amazon and Nvidia are all simultaneously suppliers, investors and customers to multiple AI labs. Their business models are explicitly portfolio‑driven: sell GPUs and cloud capacity to everyone, then capture upside via equity in the likely winners. VCs are increasingly mirroring that playbook, even if it clashes with the old mythology of loyalty and founder alignment.

The long‑term industry consequence is consolidation of power: a small group of capital providers and hyperscalers will hold economic stakes across most of the major foundation‑model players. That may reduce the chance of a single catastrophic loser, but it also blurs competitive lines and makes it harder for regulators — and founders — to understand who is really on which side.


The European and regional angle

For European founders, this shift is both a warning and an opening.

First, the warning: if elite U.S. funds increasingly behave like sector ETFs, their ability — and willingness — to be your exclusive champion against a U.S. or Chinese rival diminishes. A European AI startup that proudly announces Sequoia or Founders Fund on its cap table now has to ask whether that investor is also backing its most dangerous competitor, public or private.

This intersects awkwardly with EU regulation. The Digital Markets Act (DMA) and the AI Act already place strict obligations on powerful gatekeepers and high‑risk AI systems. If the same U.S. funds quietly own slices of multiple “systemic” AI providers, European regulators may start to look not only at Big Tech’s vertical integration, but also at horizontal financial entanglements in core infrastructure.

On the flip side, Europe has always lagged the U.S. in sheer venture firepower, but it has an advantage in trust and governance. DACH funds, Nordic investors and several pan‑European firms tend to be more conservative about conflicts of interest, partly due to culture, partly due to regulation and LP expectations. That can be turned into a brand: “We don’t back your direct competitors. Period.”

For smaller AI labs in places like Berlin, Paris, Tallinn or Ljubljana, that clear stance could matter more than the last incremental dollar of valuation. When you are competing with OpenAI or Anthropic on a tiny fraction of their budget, you are really selling differentiation on safety, sovereignty or domain expertise. Having investors who are truly on your side — and not also on your rival’s board — is part of that story.


Looking ahead

Expect more dual investing before we see any reversal. As long as frontier labs need tens of billions for GPUs and data centres, the pressure to syndicate rounds widely — and for funds to grab exposure wherever they can — will stay intense.

The interesting changes will be in the fine print. Founders will start to ask sharper questions in partner meetings:

  • Do you invest in direct rivals? Under what conditions?
  • What internal safeguards exist around information sharing?
  • Will you waive certain rights if you later back a competitor?

We are likely to see more bespoke clauses in term sheets: information rights that can be cut off in case of competitive conflicts, restrictions on board seats when a firm holds stakes in multiple category leaders, and even “most‑favoured founder” language around how intros and partnerships are allocated across competing portfolio companies.

Regulators could also enter the picture, especially in Europe. If and when foundation‑model providers are treated as systemic infrastructure — analogous to banks or cloud hyperscalers — cross‑ownership by the same financial institutions may attract antitrust and financial‑stability scrutiny.

Over the next 12–24 months, watch for three signals: whether any top‑tier firm publicly commits to a no‑rival policy as a differentiator; whether a major conflict‑of‑interest scandal emerges from leaked AI lab information; and whether limited partners start pushing back on over‑concentrated AI exposure. Any of those could force VCs to rethink how far they have drifted from their own narrative.


The bottom line

AI is exposing what venture capital has quietly become: a sophisticated risk‑management machine optimised for portfolio returns, not a guild of loyal champions. Backing both OpenAI and Anthropic might be perfectly rational for investors, but it weakens the trust founders can place in them. If you are building on or competing with frontier AI, treat investor alignment as a due‑diligence item, not a marketing slogan — and ask yourself whether you want an index fund on your cap table or a partisan ally when the real battles start.
