1. Headline & intro
The most eye‑catching AI deal of 2025—Nvidia’s touted $100 billion investment into OpenAI—now looks less like a turning point and more like a mirage. Five months after the announcement, the money hasn’t appeared, the number is being walked back, and OpenAI is actively diversifying away from Nvidia hardware. That’s not just gossip from Silicon Valley; it’s an early stress test of the entire AI boom. In this piece, we’ll unpack what actually happened, why the market overreacted, and what this says about concentration risk, custom silicon, and a maturing (or overheating) AI infrastructure market.
2. The news in brief
According to Ars Technica, Nvidia and OpenAI signed a letter of intent in September 2025 under which Nvidia could invest up to $100 billion in OpenAI’s AI infrastructure. The vision: roughly 10 gigawatts of Nvidia systems, comparable to the output of around 10 nuclear reactors, and on par with Nvidia’s annual GPU shipments.
Five months later, no definitive agreement has closed. Nvidia CEO Jensen Huang has since said the $100 billion figure was never a firm commitment but an upper bound, and that any investment would be staged. As Ars Technica summarizes from Reuters and Wall Street Journal reporting, OpenAI has simultaneously been exploring alternatives: a $10 billion deal with Cerebras, a GPU agreement with AMD, and work with Broadcom on a custom AI chip.
Reuters also reported that OpenAI engineers were dissatisfied with the inference performance of some Nvidia GPUs for products like Codex, prompting the search for low‑latency options. Nvidia’s stock dipped modestly on the back of these stories, even as both companies publicly reiterated that they still value the partnership.
3. Why this matters
The non‑deal exposes a key tension at the heart of the AI boom: everyone wants Nvidia’s margins, but nobody wants to depend on Nvidia.
In the short term, Nvidia avoids locking itself into an enormous, circular transaction that some investors already viewed skeptically: invest in a customer so they can buy even more of your hardware. That model can turbo‑charge growth on paper but also inflates expectations and raises questions about how much demand is organic versus vendor‑financed.
OpenAI, meanwhile, sends a clear signal to the market: it does not want to be permanently chained to a single GPU supplier, no matter how advanced the chips. The moves toward Cerebras, AMD, and Broadcom suggest a deliberate multi‑vendor, mixed‑architecture strategy aimed at lowering costs, cutting latency, and improving bargaining power.
The losers, at least for now, are momentum traders who were extrapolating endless demand for Nvidia GPUs, and smaller AI infrastructure startups that relied on Nvidia’s “seed you so you can buy from us” model. If the poster child customer is quietly hedging away from that pattern, it becomes harder to justify ultra‑aggressive GPU build‑outs everywhere else.
More subtly, this episode erodes the aura of inevitability around Nvidia’s dominance. The best chips in the world are still constrained by power, latency, and economics—and by customers’ fear of being locked into a single vendor.
4. The bigger picture
This isn’t happening in a vacuum. For a decade, hyperscalers have been learning a clear lesson: if AI is strategic, you eventually design your own silicon.
Google did it early with TPUs, Amazon followed with Inferentia and Trainium, and Meta has been building in‑house AI accelerators. OpenAI, lacking its own cloud, initially had to live entirely inside other people’s hardware decisions—mostly Microsoft’s Azure, powered by Nvidia. The Broadcom custom chip project and Cerebras deal are OpenAI’s belated answer to that structural dependence.
On Nvidia’s side, the company has been using equity investments and guaranteed‑purchase arrangements to create a flywheel: fund AI startups and platforms, book their future GPU orders, then showcase their growth as proof of insatiable demand. Critics have argued this can shade into a synthetic demand loop, reminiscent of past bubbles where financing and consumption blur together.
The apparent fizzling of the $100 billion headline brings that strategy under scrutiny. When the biggest, most credible AI customer on earth won’t fully play along with the circular model at that scale, it implicitly validates concerns that some of the projected demand curves were too optimistic.
At the industry level, the story fits a broader trend: a shift from “maximum FLOPs” to “usable, efficient FLOPs.” Training mega‑models remains GPU‑heavy, but inference at internet scale cares brutally about latency, energy, and cost per query. That is exactly where specialized accelerators, and eventually custom ASICs, can undercut general‑purpose GPUs. The Nvidia–OpenAI non‑deal is one of the clearest public signs that this transition is no longer theoretical.
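To make “cost per query” concrete, here is a minimal back‑of‑envelope sketch. Every number in it is an invented assumption for illustration; none comes from the reporting or from any vendor’s actual pricing or benchmarks.

```python
# Illustrative back-of-envelope model of inference economics.
# All numbers are invented assumptions, not real Nvidia/Cerebras/OpenAI figures.

def cost_per_query(power_kw, energy_price_per_kwh, hourly_amortized_hw,
                   queries_per_second):
    """Hourly energy cost plus hourly amortized hardware cost,
    divided by queries served in that hour."""
    hourly_energy = power_kw * energy_price_per_kwh
    hourly_total = hourly_energy + hourly_amortized_hw
    queries_per_hour = queries_per_second * 3600
    return hourly_total / queries_per_hour

# Hypothetical general-purpose GPU node: more power, higher amortized cost.
gpu = cost_per_query(power_kw=10, energy_price_per_kwh=0.12,
                     hourly_amortized_hw=40.0, queries_per_second=500)

# Hypothetical specialized inference accelerator: cheaper per served query.
asic = cost_per_query(power_kw=6, energy_price_per_kwh=0.12,
                      hourly_amortized_hw=25.0, queries_per_second=600)

print(f"GPU:  ${gpu:.6f} per query")
print(f"ASIC: ${asic:.6f} per query")
```

Even with made‑up inputs, the structure of the formula shows why the calculus differs between training (throughput‑bound, amortization dominates) and inference (where small per‑query savings multiply across billions of requests).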
5. The European / regional angle
For Europe, this saga underlines a strategic vulnerability: AI sovereignty built on imported silicon is fragile.
European cloud providers and startups are extremely exposed to Nvidia’s pricing and supply cycles. If OpenAI, with Microsoft behind it, feels the need to diversify, European players with far less leverage should probably be even more worried about single‑vendor lock‑in.
Regulators will also take note. The EU’s competition authorities have already shown interest in hyperscaler market power; Nvidia’s pattern of investing in customers that then buy massive GPU volumes could easily become a subject of antitrust curiosity, especially when combined with its near‑monopoly in high‑end AI accelerators.
The episode intersects with the EU AI Act and Digital Markets Act (DMA) in a subtle way: trustworthy AI and fair digital markets are hard to guarantee if the compute layer is both hyper‑concentrated and financially entangled. Expect Brussels to pay closer attention to vertical integration between chip vendors, cloud platforms, and model providers.
For European hardware initiatives—whether that’s SiPearl in HPC, emerging RISC‑V projects, or national AI cloud efforts in Germany, France, and the Nordics—the message is encouraging but sobering. Yes, even OpenAI is looking beyond Nvidia, which validates the idea of alternative architectures. But the bar is high: alternatives must not only exist, they must be cheaper, lower‑latency, and wrapped in a mature software ecosystem.
6. Looking ahead
The most likely outcome is not a dramatic breakup, but a messy coexistence.
Nvidia will remain OpenAI’s primary training partner for frontier models; the CUDA ecosystem, developer tooling, and sheer performance are still unmatched. At the same time, more of OpenAI’s inference workloads—especially latency‑sensitive products—will migrate to Cerebras, AMD, and future custom ASICs, gradually eroding Nvidia’s share of the most price‑sensitive part of the stack.
Investors should watch three things over the next 12–24 months:
- Capex disclosures from Microsoft and other hyperscalers: do they tilt more toward diversified or in‑house silicon?
- Nvidia’s margin profile: any sustained compression would suggest customers are finally winning some pricing battles.
- Regulatory signals in the US and EU around AI infrastructure concentration and vendor‑financed demand.
For OpenAI, the big open question is execution risk. Running a mixed fleet of accelerators from multiple vendors is hard engineering work: scheduling, frameworks, and reliability become far more complex. If they pull it off, they gain structural cost and negotiation advantages. If they stumble, they risk outages and slower product cycles.
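To sketch why a mixed fleet is harder to run than a single‑vendor one, here is a toy request router. The pool names, latencies, and costs are placeholders, and this is not OpenAI’s actual scheduling logic; it only illustrates the kind of decision a heterogeneous fleet forces on every request.

```python
# Toy scheduler for a hypothetical multi-vendor accelerator fleet.
# All latency and cost figures are invented placeholders, not benchmarks.

from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    latency_ms: float      # typical per-request latency for this hardware
    cost_per_1k: float     # dollars per 1,000 requests (assumed)
    free_slots: int        # remaining capacity in this pool

def route(pools, latency_budget_ms):
    """Pick the cheapest pool with capacity that meets the latency budget.
    Returns None when no eligible pool has room -- a failure mode a
    homogeneous fleet handles with simple overprovisioning."""
    eligible = [p for p in pools
                if p.free_slots > 0 and p.latency_ms <= latency_budget_ms]
    if not eligible:
        return None
    best = min(eligible, key=lambda p: p.cost_per_1k)
    best.free_slots -= 1
    return best.name

fleet = [
    Pool("gpu-general", latency_ms=120, cost_per_1k=2.0, free_slots=100),
    Pool("wafer-scale", latency_ms=35, cost_per_1k=3.5, free_slots=10),
    Pool("custom-asic", latency_ms=60, cost_per_1k=1.2, free_slots=50),
]

print(route(fleet, latency_budget_ms=50))   # only the low-latency pool fits
print(route(fleet, latency_budget_ms=200))  # cheapest eligible pool wins
```

Real systems also have to handle per‑vendor software stacks, model‑compatibility matrices, and failover between pools, which is where most of the execution risk mentioned above actually lives.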
The broader risk is that too much speculative capacity—funded by optimistic projections and circular deals—ends up underutilized if AI monetization lags. In that scenario, today’s GPU gold rush could start to look uncomfortably like the data‑center overbuild of the early 2000s.
7. The bottom line
The vanished $100 billion isn’t just a missed headline; it’s a reality check on the AI infrastructure story. Nvidia is still the king of training silicon, but the illusion of infinite, vendor‑financed demand is cracking, and OpenAI is behaving like a mature platform hedging against single‑supplier risk. For European and global players alike, the lesson is simple: in AI, control over compute is strategy. The question is who, beyond Nvidia, will actually earn that control.


