Nvidia, OpenAI and the $100B Illusion: What Jensen Huang Isn’t Saying
Nvidia’s CEO Jensen Huang is publicly brushing off talk of friction with OpenAI and insisting the company will “definitely participate” in its next funding round. But when a supposed $100 billion package quietly shrinks to “tens of billions,” something important is happening behind the scenes. This isn’t just about one relationship going sour; it’s about who will own the most critical resource of this decade: AI compute. In this piece, we’ll unpack what’s performance, what’s negotiation, and what this all really means for the balance of power in AI.
The news in brief
According to TechCrunch, Huang was asked in Taipei about a Wall Street Journal report claiming Nvidia was rethinking its massive OpenAI deal. In September, Nvidia and OpenAI announced a nonbinding plan under which Nvidia could invest up to $100 billion and help build 10 GW of computing infrastructure for OpenAI.
The Journal reported that Huang has recently been stressing the nonbinding nature of that agreement, while privately questioning OpenAI’s business strategy and worrying about rivals like Anthropic and Google. The paper also said the two companies are exploring a smaller-scale relationship, potentially an equity investment in the “tens of billions” rather than $100 billion.
TechCrunch notes that an OpenAI spokesperson told the WSJ the companies are actively working through partnership details and that Nvidia remains central to its systems. Bloomberg, meanwhile, reported Huang’s public response: Nvidia will “definitely participate” in OpenAI’s new funding round and invest a “great deal of money,” though he declined to say how much. The New York Times has separately reported that Nvidia, Amazon, Microsoft and SoftBank are all discussing possible investments in OpenAI’s planned $100 billion raise.
Why this matters
Strip away the diplomatic language and a simple reality emerges: the era in which chipmakers were merely suppliers to AI labs is over. Nvidia is morphing into a capital allocator and, in some cases, a co-strategist for the biggest AI labs. That fundamentally changes incentives across the stack.
If Nvidia had gone anywhere near the originally floated $100 billion plus 10 GW of dedicated infrastructure, it would have effectively tied a huge chunk of its future capacity, risk and political capital to one customer: OpenAI. That might thrill OpenAI and Microsoft, but it would alarm every other hyperscaler, startup and sovereign buyer currently begging for Nvidia GPUs.
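To put 10 GW in perspective, a rough back-of-envelope calculation (every input below is an illustrative assumption, not a figure from either company) shows how much hardware that commitment would have tied up:

```python
# Back-of-envelope: how many accelerators might 10 GW of datacenter
# capacity support? All inputs are illustrative assumptions.
chip_watts = 700           # assumed draw of one H100-class accelerator
node_overhead_watts = 600  # assumed CPU, networking and storage per GPU
pue = 1.3                  # assumed power usage effectiveness (cooling etc.)

all_in_watts = (chip_watts + node_overhead_watts) * pue
accelerators = 10e9 / all_in_watts  # 10 GW expressed in watts

print(f"~{accelerators / 1e6:.1f} million accelerators")
```

Even with generous assumptions, 10 GW implies several million top-end GPUs dedicated to a single customer, which would be a meaningful slice of Nvidia's total output for years.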
By publicly reiterating confidence in OpenAI while quietly signalling that the deal is nonbinding and likely smaller, Huang is doing three things at once:
- Maintaining negotiating leverage with OpenAI on valuation, governance and long-term supply commitments.
- Reassuring other customers that Nvidia won’t become “OpenAI’s captive foundry,” which could distort pricing and access.
- Keeping regulators calmer at a moment when concentration of compute and AI power is already under scrutiny.
The winners, at least in the near term, are Nvidia itself and OpenAI’s competitors. Anthropic, Google, Meta and a growing ecosystem of open‑source players benefit from Nvidia remaining relatively neutral rather than locked into one mega‑deal. The losers are anyone who hoped that a $100 billion injection would rapidly de-risk OpenAI’s infrastructure bottlenecks and give it an unassailable lead.
This episode also exposes how fragile headline AI numbers have become. “$100 billion” sounds like dominance. “Tens of billions over time, subject to conditions” looks more like a sophisticated supply and equity agreement. The market is starting to learn the difference.
The bigger picture
This story sits at the intersection of three trends.
1. The financialization of AI infrastructure.
Training a frontier model can already cost hundreds of millions of dollars in compute, with next-generation runs projected to reach into the billions. OpenAI’s reported $100 billion target, first mentioned by the WSJ in December, is less a vanity number and more a signal of the capital intensity of the next wave of AI. Infrastructure is no longer just a capex line item for cloud providers; it is becoming an asset class in its own right, attracting sovereign wealth, infrastructure funds and now chipmakers.
We’ve seen hints of this with Microsoft’s deep infrastructure commitments to OpenAI, Amazon and Google’s strategic investments in Anthropic, and SoftBank’s interest in building or funding AI datacenters. Nvidia entering that game as an equity investor blurs the line between vendor and platform.
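A rough sketch of training-run economics makes that capital intensity concrete. Every input here is an assumption chosen for round numbers, not any lab's reported figure:

```python
# Illustrative cost of a single large training run; all inputs are
# assumptions, not any company's actual numbers.
gpus = 25_000
run_days = 100
usd_per_gpu_hour = 2.50   # assumed blended rental/amortized rate

gpu_hours = gpus * run_days * 24
cost_usd = gpu_hours * usd_per_gpu_hour
print(f"{gpu_hours / 1e6:.0f}M GPU-hours, ~${cost_usd / 1e6:.0f}M")
```

Scale the cluster or the run length up by an order of magnitude and a single training program alone reaches into the billions, before inference, staff or datacenter construction are counted.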
2. Vertical integration vs. ecosystem trust.
Historically, dominant chip companies that tried to move too far up the stack triggered ecosystem backlash. Intel’s attempts to bundle software and services created unease among OEMs who feared becoming second-tier. Nvidia is walking that tightrope now: it wants exposure to the upside of frontier AI, but it can’t be seen to pick one winner.
That’s why this negotiation matters more than its final dollar figure. A full-throated $100 billion commitment would have been a declaration that Nvidia is betting its future on OpenAI. A more modest, staged investment keeps the door open to similar arrangements with others — or to playing the role of “Switzerland of compute,” the supplier everyone needs but no one owns.
3. A maturing AI funding cycle.
Early in the generative AI boom, investors rewarded outsized narratives: trillion‑dollar TAMs, quasi‑infinite demand for GPUs, and valuations that assumed permanent dominance. The pushback around this Nvidia–OpenAI deal suggests we’re moving into a more disciplined phase.
Nvidia knows that overcommitting to one player just as AMD, custom ASICs and even in‑house chips from cloud providers are improving could age badly. OpenAI, on the other hand, needs to secure multi‑year compute at predictable prices without handing too much strategic control to any single supplier. Both sides are discovering that $100 billion headlines are easy; sustainable partnerships are hard.
The European and regional angle
For Europe, this is a reminder of its awkward position in the AI race: heavily exposed to decisions taken in California and Taipei, with limited influence over the underlying hardware stack.
The EU AI Act may soon set global benchmarks for how frontier models are governed, but when it comes to who controls the GPUs and datacenters, the real decisions are being made by Nvidia, OpenAI, Microsoft, Amazon and a handful of Asian manufacturers. If Nvidia had essentially “pre‑sold” 10 GW of capacity to OpenAI, European cloud providers and research institutions would have faced an even tougher scramble for high‑end GPUs.
Brussels is already probing cloud concentration and has signalled concerns about hyperscalers bundling infrastructure, AI services and data. A tight, exclusive Nvidia–OpenAI tie‑up would have raised new questions under EU competition law and perhaps even the Digital Markets Act, especially given Microsoft’s existing role in OpenAI’s stack.
Instead, a more flexible, multi‑partner Nvidia strategy arguably benefits European buyers. It preserves at least the possibility of large allocations going to regional clouds, national HPC centres and industrial players building their own AI infrastructure. The challenge is that Europe still lacks a true alternative at the chip level; efforts like European‑backed RISC‑V initiatives and local accelerators are promising but years behind.
For EU‑based startups and enterprises, the signal is clear: access to compute will remain a strategic bottleneck, and depending on a single US lab or cloud for critical AI capabilities is risky. Expect more European companies to explore multi‑cloud, hybrid and even on‑prem deployments, partly as a hedge against exactly the kind of political and financial turbulence we now see between Nvidia and OpenAI.
Looking ahead
What happens next will be less dramatic than the original headline — but more consequential.
The most likely outcome is a phased, multi‑year agreement where Nvidia:
- takes a minority equity stake in OpenAI,
- commits to supply a significant, but not exclusive, volume of GPUs and systems,
- and possibly co‑finances specific datacenter builds rather than a monolithic 10 GW package.
In parallel, Nvidia will continue deep partnerships with other labs and cloud providers, quietly ensuring that no single customer can credibly threaten to walk away and build a full stack elsewhere.
For OpenAI, the crucial questions are governance and optionality. How much board influence, if any, will Nvidia seek? Will long‑term supply guarantees lock OpenAI into one hardware roadmap just as competition in AI accelerators heats up? And how will Microsoft — both OpenAI’s main cloud and a potential rival investor — react if Nvidia’s role grows?
Timeline‑wise, expect more concrete numbers to emerge when OpenAI formally announces its new funding round and partner mix. Until then, both sides have reasons to keep things vague: Huang wants to project confidence without overcommitting; OpenAI wants to maintain negotiating leverage with all interested investors.
The biggest risk is that political, regulatory or market shocks — from export controls to a sharp slowdown in AI demand — hit before the ink is dry. The biggest opportunity, especially for outsiders, is that a less exclusive Nvidia–OpenAI pact leaves space for alternative alliances: AMD with a major cloud, a European sovereign AI stack, or an unexpected open‑source‑first player with serious capital behind it.
The bottom line
The walk‑back from a splashy $100 billion vision to a more measured “great deal of money” doesn’t mean Nvidia is souring on OpenAI; it means both sides are waking up to the real constraints of the AI economy. Nvidia wants upside without capture, OpenAI wants security without dependence, and the rest of the market wants a supplier that doesn’t play favourites. The outcome of this negotiation won’t just shape one deal — it will signal how power over AI compute will be shared, or hoarded, in the years ahead. The open question is: will regulators and rivals accept Nvidia as both the casino and a player at the table?