Google’s Thinking Machines deal shows compute, not models, is the new AI chokepoint

April 22, 2026
5 min read
Illustration of a Google Cloud data center linked to an advanced AI research lab

Thinking Machines Lab signing a multi‑billion‑dollar deal with Google Cloud is not just another GPU contract. It is a signal that the real power in AI is shifting from who has the cleverest model to who controls the densest, most reliable compute. For European and global startups dreaming of building the next OpenAI, this is the new gravity well.

In this piece, we will unpack what TechCrunch reported, why Google is so eager to court a still‑secretive lab, what this says about the future of cloud and chips, and how this deepens the dependence of cutting‑edge AI on a tiny club of US infrastructure giants.

The news in brief

According to TechCrunch, Thinking Machines Lab, founded in 2025 by former OpenAI chief technology officer Mira Murati, has signed a new multi‑billion‑dollar agreement with Google Cloud. A source cited by TechCrunch puts the deal in the single‑digit billions of dollars.

The agreement gives Thinking Machines access to Google Cloud infrastructure powered by Nvidia’s latest GB300 GPUs, which Google says double training and serving speed compared to the previous GPU generation. Beyond raw chips, the package reportedly bundles Google services like storage, Kubernetes Engine and the Spanner database, and is tailored to support reinforcement learning workloads.

Thinking Machines previously partnered with Nvidia in a deal that included investment from the chipmaker; this is the lab’s first agreement with a major cloud provider, and it is non‑exclusive. The lab’s first product, Tinker, launched in October 2025 as a tool that automates the creation of custom frontier‑level AI models.

The deal also follows recent capacity agreements between Anthropic and both Google and Amazon for massive amounts of AI compute and energy.

Why this matters

This agreement is strategically important for three reasons: it locks in a fast‑rising lab, it showcases Google’s answer to Microsoft–OpenAI, and it underlines that compute, not algorithms, is the scarce resource in AI.

For Google, Thinking Machines is an ideal anchor tenant. It is led by a high‑profile ex‑OpenAI executive, focused on frontier‑scale research, and its flagship product, Tinker, effectively tries to industrialise model building. A tool that automates the creation of custom frontier models is a guaranteed compute furnace. Whoever hosts that furnace has long‑term leverage.

The deal is officially non‑exclusive, but the gravitational pull of cloud is real. Once a lab standardises its pipelines on Google’s stack – from Kubernetes orchestration and networking to data storage and experiment tracking – switching significant workloads to a rival is painful, even before you factor in discount structures and early access to hardware.
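Some of that coupling is visible at the code level. As an illustrative sketch (the provider table, bucket name and helper function below are hypothetical, not from the article), consider how even something as mundane as checkpoint storage accretes provider‑specific assumptions:

```python
# Illustrative only: shows how artifact URIs diverge per cloud, so that
# thousands of training scripts end up provider-specific over time.
# All names below are hypothetical.

URI_SCHEMES = {"gcp": "gs", "aws": "s3", "azure": "az"}

def checkpoint_uri(provider: str, bucket: str, run_id: str, step: int) -> str:
    """Build a checkpoint object URI in the provider's native scheme."""
    try:
        scheme = URI_SCHEMES[provider]
    except KeyError:
        raise ValueError(f"unsupported provider: {provider}")
    return f"{scheme}://{bucket}/{run_id}/checkpoints/step_{step:08d}.ckpt"

# A pipeline standardised on Google Cloud bakes in the gs:// scheme:
print(checkpoint_uri("gcp", "lab-training-artifacts", "run-42", 1500))
# -> gs://lab-training-artifacts/run-42/checkpoints/step_00001500.ckpt
```

Migrating later means rewriting every such reference, plus the identity, networking and orchestration configuration that surrounds it, which is why "non‑exclusive" on paper rarely means portable in practice.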

The other winner is Nvidia. Whether Anthropic goes to Amazon, Thinking Machines to Google, or OpenAI to Microsoft, the green logo sits underneath almost everything. Google may be pushing its own TPUs with partners like Anthropic, but the Thinking Machines deal shows that when labs demand the absolute bleeding edge, Nvidia still sets the pace.

The losers are smaller clouds and open ecosystems. When the most advanced labs are bound to multi‑billion‑dollar, multi‑gigawatt contracts with three US hyperscalers, the space for independent infrastructure shrinks. It becomes harder for a European cloud provider, a national supercomputing centre, or even a well‑funded startup to attract frontier‑scale workloads.

The bigger picture

This deal sits in a pattern that has become clear over the past three years: hyperscalers are racing to sign quasi‑utility contracts with leading AI labs. Microsoft’s long‑term agreements with OpenAI, Anthropic’s dual arrangements with Google and Amazon, and now Thinking Machines’ move to Google all point in the same direction.

Cloud providers are no longer just hosting apps; they are acting like power companies for AI. They commit capacity, energy, cooling, and specialised chips years ahead, and in return they get predictable, enormous demand and a flagship customer that attracts others.

At the same time, the nature of the workloads is shifting. TechCrunch notes that Google’s release highlights reinforcement learning as a core focus for Thinking Machines. That matters. Reinforcement learning has historically powered breakthroughs such as DeepMind’s game‑playing systems, but at frontier scale it becomes brutally expensive. Training agents that explore huge state spaces or meta‑systems that design other models pushes compute use into yet another gear.
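To see why frontier‑scale reinforcement learning gets expensive so quickly, a rough back‑of‑envelope calculation helps. The numbers below are illustrative assumptions, not figures reported by TechCrunch or Google:

```python
def rl_compute_flops(episodes: float, steps_per_episode: float,
                     flops_per_model_call: float,
                     calls_per_step: float = 1.0) -> float:
    """Rough total FLOPs for an RL run: every environment step costs
    at least one model forward pass, and usually gradient updates too."""
    return episodes * steps_per_episode * flops_per_model_call * calls_per_step

# Hypothetical setting: 1e6 episodes, 1e3 steps each, 1e12 FLOPs per
# model call (a large model's forward pass), 3 calls per step to cover
# acting plus learning.
total = rl_compute_flops(1e6, 1e3, 1e12, 3.0)
print(f"{total:.1e} FLOPs")

# At an assumed ~1e15 FLOP/s of sustained throughput per accelerator,
# that is ~3e6 GPU-seconds, roughly 830 GPU-hours, for this toy setting.
# Real frontier runs scale every one of these axes by orders of magnitude.
```

The point is not the exact figures but the multiplication: episodes, episode length and model size all compound, which is exactly the demand profile a hyperscaler wants locked into its data centres.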

Tinker sits at this crossroads. If it really can automate the construction of frontier models, then we are entering an era where models help design the next generation of models. That tightens the feedback loop between algorithmic innovation and industrial‑scale compute. A small research idea can, in a few months, become a pipeline burning through billions of GPU hours.

Competitively, Google is signalling that it will not concede the frontier‑lab segment to Microsoft and Amazon. While Google has strong in‑house research via DeepMind and Google Research, partnerships like this provide external validation of its cloud offering and hedge against any future misalignment between its own AI roadmap and customer needs.

The European and regional angle

From a European standpoint, this deal is another reminder that the most advanced AI labs are structurally dependent on US infrastructure. Thinking Machines is not European, but the pattern matters: when every new frontier lab signs with Google, Microsoft or Amazon, the chance of an independent European compute stack growing to global relevance diminishes.

This collides directly with EU ambitions around digital and AI sovereignty. The EU AI Act will regulate high‑risk and general‑purpose AI systems, but it does not change the fact that the physical compute sits mostly in data centres controlled by US corporations, often running Nvidia hardware and proprietary orchestration software.

European cloud providers – OVHcloud, Scaleway, Deutsche Telekom’s T‑Systems, or Nordic and Central European EuroHPC supercomputing centres – risk being locked into the role of regional specialists and compliance hosts, rather than homes for globally leading labs. Even the EuroHPC initiative, with systems like LUMI and Vega, struggles to match the agility and scale of hyperscalers signing bespoke, multi‑billion‑dollar contracts.

Regulation offers one lever. The EU Data Act already introduces rules to make cloud switching easier and curb lock‑in. The Digital Markets Act and competition authorities could, in theory, scrutinise bundled deals where access to frontier chips is conditional on using a full cloud stack. But regulators are moving slower than the market.

For European startups using tools like Tinker, this concentration creates both opportunity and risk. On one hand, it becomes easier to spin up formidable models on Google Cloud without building infrastructure from scratch. On the other, dependence on a small set of US providers can complicate data residency, GDPR compliance, and long‑term bargaining power.

Looking ahead

Expect more of these deals, not fewer. Any lab that aims to train frontier‑scale models will either sign with one of the big three clouds, join a national or regional supercomputing initiative, or quietly accept that it will never operate at the bleeding edge.

For Google and Thinking Machines, the next 12 to 24 months will likely be about turning this capacity into differentiated products. If Tinker is successful, we should see a wave of domain‑specific frontier models built by enterprises rather than labs: models tuned for finance, pharma, logistics, or national‑language tasks. Google will be keen to position its cloud as the default home for those workloads.

Watch for three things.

First, the degree of multi‑cloud reality. The deal is nominally non‑exclusive. If Thinking Machines later announces big training runs on another provider, that will be a sign that even frontier labs can avoid total lock‑in.

Second, regulatory attention. As AI capacity starts to look like critical infrastructure, especially in areas like healthcare or public services, EU and national regulators may begin to treat long‑term compute contracts the way they treat energy or telecom agreements.

Third, the energy and sustainability angle. Multi‑gigawatt AI commitments, like those mentioned around Anthropic, raise hard questions about grid capacity and climate targets. Europe is more aggressive on climate policy than the US; that may shape where and how large AI clusters are located.

For European readers, the practical takeaway is straightforward: if your strategy depends on cutting‑edge AI, your true dependency is on compute. Governance, procurement and risk management need to treat cloud capacity like a strategic resource, not a mere IT line item.

The bottom line

Google’s deal with Thinking Machines confirms that the scarce asset in AI is no longer ideas but infrastructure. A small club of US giants, plus Nvidia, sits at the heart of that system. For Europe, the question is no longer whether to regulate AI models, but whether it can build or at least meaningfully influence the compute stack beneath them. If your organisation is betting on AI, the uncomfortable question is simple: who really owns the power behind your models?
