1. Headline & intro
Anthropic is no longer just an AI lab; it is starting to look like a power-hungry industrial giant. Its new multi‑gigawatt compute deal with Google and Broadcom says more about the future of AI than any model demo: AI’s bottleneck is now electricity and custom silicon, not algorithms. In this piece we’ll unpack what this 3.5 GW bet really means – for Anthropic, for Google’s cloud and chip ambitions, for competitors that can’t buy power at this scale, and for regulators who suddenly find that “AI policy” is also energy, competition and infrastructure policy.
2. The news in brief
According to TechCrunch, Anthropic has signed an expanded infrastructure agreement with Google and Broadcom to massively increase the compute available for its Claude models.
The deal extends Anthropic’s use of Google Cloud’s TPU (tensor processing unit) chips, building on an earlier October 2025 agreement that already covered more than 1 gigawatt of compute capacity. A recent Broadcom filing with the U.S. Securities and Exchange Commission indicates the new arrangement covers around 3.5 gigawatts of compute-related capacity, most of it located in the United States. Anthropic notes this is part of its previously announced commitment to invest about $50 billion into U.S. compute infrastructure.
The additional capacity is expected to come online in 2027. TechCrunch reports that Anthropic’s annualised revenue run rate has risen to $30 billion, up sharply from $9 billion at the end of 2025, with more than 1,000 enterprise customers each spending over $1 million per year. The company recently closed a Series G funding round of $30 billion at a valuation of $380 billion, even as the U.S. Department of Defense has classified Anthropic as a supply‑chain risk.
3. Why this matters
Anthropic is effectively pre‑buying its future – in electricity and in chips.
For Anthropic, the upside is obvious. Committing to multi‑gigawatt capacity locks in scarce AI accelerators and data centre build‑out at a time when everyone, from Big Tech to hedge funds, is scrambling for the same hardware and power. It signals to customers and investors that Anthropic intends to stay in the “frontier model” race, not just participate in the broader AI software market.
Google and Broadcom are also clear winners. For Google Cloud, this is both a marquee customer win and a validation of its strategy to compete with Nvidia using homegrown TPUs. It helps Google fill giant new data centres with a captive, high‑margin workload and makes Anthropic deeply dependent on Google’s infrastructure. For Broadcom, whose custom ASIC and networking divisions sit behind many of these systems, it is another long‑term revenue stream that justifies its own data‑centre‑oriented R&D.
The losers are more subtle. Smaller AI labs and independent cloud providers simply cannot secure this kind of power and silicon at comparable prices. As AI training and inference consolidate onto a few hyperscalers with bespoke chips, the barrier to entry for competing at the top end of the model spectrum rises dramatically.
There is also a policy downside: resilience and concentration risk. When one private company signs up for power and compute measured in multiple gigawatts, it begins to compete with cities and critical infrastructure for grid capacity. And when that capacity is tightly coupled with a single cloud provider’s proprietary stack, regulators will eventually ask whether we are comfortable with a handful of vertically integrated “AI‑utilities” controlling most frontier‑grade compute.
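To put “multiple gigawatts” in concrete terms, a back‑of‑the‑envelope calculation helps. Assuming (unrealistically) that the full 3.5 GW ran continuously all year – real utilisation will be lower – the implied annual energy draw is on the order of 30 TWh, roughly the yearly electricity consumption of a small European country:

```python
# Back-of-the-envelope: annual energy of a 3.5 GW compute footprint,
# assuming (unrealistically) 100% continuous utilisation.
capacity_gw = 3.5
hours_per_year = 24 * 365          # 8,760 hours
annual_twh = capacity_gw * hours_per_year / 1000  # GW * h -> GWh -> TWh

print(f"{annual_twh:.1f} TWh/year")  # ~30.7 TWh/year
```

Even at half that utilisation, the draw would rival a major metropolitan area, which is why grid operators and regulators are starting to treat these contracts as energy infrastructure, not IT procurement.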
4. The bigger picture
This deal fits into a broader realignment of the AI industry around massive, long‑term compute contracts.
Microsoft’s multi‑year, multi‑billion‑dollar commitments to OpenAI – backed by Nvidia GPUs and now custom Azure chips – set the template: secure vast compute, tie a leading lab to your cloud, and use that as a wedge to win enterprise workloads. Amazon has followed a similar pattern with Anthropic itself, while Meta is pouring capital into its own in‑house infrastructure for open‑weight models.
Anthropic’s new agreement with Google and Broadcom turns the dial further. The unit of competition is no longer “number of GPUs,” but gigawatts of data centre capacity. At that scale, AI starts to look less like software and more like steel or chemicals: capital intensive, power hungry, and dominated by an oligopoly with supply chains and balance sheets that others cannot match.
Historically, we have seen similar patterns in telecoms (3G/4G spectrum auctions and nationwide rollouts) and cloud computing (the emergence of AWS, Azure and Google Cloud). The early stage is messy and competitive; then massive fixed costs, network effects and regulatory complexity cement a few dominant players. Anthropic’s 3.5 GW bet accelerates AI along that path.
It also underscores a strategic shift away from Nvidia’s general‑purpose GPUs toward vertically integrated stacks. Google’s TPUs, Broadcom’s custom silicon and specialised networking give Anthropic a path to frontier‑level performance without relying solely on the oversubscribed GPU market. If this works, more labs may decide that the only viable route to the top tier is to align with one hyperscaler’s proprietary chips and software ecosystem.
5. The European / regional angle
From a European perspective, the most important part of this story is not Anthropic’s revenue figure, but where the 3.5 GW will sit: overwhelmingly in the United States.
Europe talks a lot about “digital sovereignty” and strategic autonomy in AI, yet the frontier infrastructure is being built mainly in U.S. jurisdictions, on U.S. chips, under U.S. export rules. European enterprises that adopt Claude at scale will depend indirectly on American data centers and supply chains that Brussels does not control.
At the same time, EU regulation will absolutely shape how this capacity can be used for European customers. GDPR still governs what data can flow into model training and inference. The Digital Services Act and Digital Markets Act constrain how the largest platforms can bundle AI services. And the EU AI Act – with its risk‑based requirements for “high‑risk” AI systems and its obligations for general‑purpose AI models – means that offering Claude in Europe will not just be a matter of flicking a switch in a U.S. region.
This creates both risks and opportunities for European cloud and chip players. On the one hand, the sheer scale of Anthropic’s deal makes it almost impossible for a European‑only player to match frontier‑model capacity. On the other, strict EU rules on data locality, energy efficiency and transparency could make smaller, regional AI infrastructure – powered by renewables in the Nordics, nuclear in France, or cross‑border grids in Central Europe – more attractive for certain use cases.
For European policymakers, Anthropic’s move is a reminder that AI industrial policy cannot be separated from energy policy and competition law. The question is no longer just “who builds the best models?” but “who controls the gigawatts those models require?”
6. Looking ahead
Several things are worth watching over the next 12–24 months.
First, product velocity. Anthropic has now pre‑paid for the ability to train and serve much larger or more numerous models. If Claude’s capabilities and reliability do not improve noticeably, enterprises will question whether sheer scale is translating into business value – and regulators will question whether these power demands are justified.
Second, market structure. With Anthropic deeply tied into Google’s TPU‑based stack, the industry is edging toward a world where each frontier lab is effectively “anchored” to one hyperscaler: OpenAI to Microsoft, Anthropic increasingly to Google, and others aligning with Amazon or Oracle. That raises interoperability and lock‑in issues for customers who do not want to bet their entire AI strategy on a single cloud vendor.
Third, policy reaction. The fact that the U.S. Department of Defense already flags Anthropic as a supply‑chain risk shows that governments are paying attention. As AI labs accumulate multi‑gigawatt footprints, we should expect more scrutiny around national security, resilience, labour conditions in data‑centre construction and, critically, environmental impact.
Finally, competition from alternative models. Even as frontier labs scale up, open‑weight and smaller domain‑specific models are getting better and cheaper. If many enterprise workloads can be served by more modest systems running on local or regional infrastructure, the economic justification for ever larger, ever more power‑hungry models may come under pressure.
The most likely outcome is a bifurcated market: a handful of giant, heavily regulated “AI utilities” at the top, and a vibrant ecosystem of specialised and open‑source models running on more modest infrastructure underneath.
7. The bottom line
Anthropic’s expanded 3.5 GW deal with Google and Broadcom is less about one company’s growth and more about the industrialisation of AI. Compute at this scale turns labs into power players in a very literal sense, reshaping cloud competition, energy grids and regulatory debates. The question for policymakers, enterprises and citizens – in Europe and beyond – is whether we are comfortable with AI’s future being built like a utility sector, dominated by a small club of firms that own the gigawatts. If not, now is the time to act, not after the concrete has set and the TPUs are humming.