Tokenmaxxing and the AI Bubble: When More Compute Isn’t a Strategy
Silicon Valley has a new obsession: tokenmaxxing — pushing ever more tokens through ever larger models, as if scale alone guarantees progress. According to TechCrunch’s Equity podcast, we’re now at the point where a shoe brand is rebranding as an AI infrastructure company, OpenAI is buying everything from personal finance apps to media, and Anthropic is privately demoing a model it considers too risky for broad release. This isn’t just colourful startup drama. It’s a test of whether the AI boom is building durable infrastructure or just inflating the next great tech bubble.
In this piece, we’ll unpack what this week’s AI headlines really say about power, capital allocation, and who will be left holding the bag.
The news in brief
As reported by TechCrunch’s Equity podcast, the AI industry spent the week providing fresh evidence of just how extreme the current cycle has become.
The hosts describe a widening gap between AI insiders and everyone else, not only in spending and expectations, but even in vocabulary — with terms like “tokenmaxxing” capturing the drive to push more and more workload through foundation models.
According to TechCrunch, OpenAI is on an acquisition spree, picking up assets ranging from an AI personal finance startup (Hiro, detailed in a separate TechCrunch report) to talk show formats and other media properties. Meanwhile, lifestyle brand Allbirds is said to be selling off its core shoe business and repositioning itself as an AI infrastructure play.
TechCrunch also notes that Anthropic has unveiled a new model it considers too powerful or sensitive for general public release, while still choosing to demonstrate it privately to high‑level policymakers, including U.S. Federal Reserve Chair Jerome Powell. The Equity episode uses these stories as a lens on the emerging AI infrastructure stack and the intensifying enterprise battle between OpenAI and Anthropic.
Why this matters
The through-line in all of this is simple: we are pouring staggering amounts of capital, talent, and political attention into AI — but the incentives are skewed toward scale first, value later.
Tokenmaxxing is not just a meme. It’s a mindset in which success is measured by:
- how many tokens a model processes,
- how many GPUs a company controls,
- how many exclusive data sources or distribution channels it can hoover up.
OpenAI’s moves into consumer finance and media, as highlighted by TechCrunch, tell you where the new moat is supposed to come from: own the user, own the data, own the narrative. A personal finance app is not just a product; it’s a structured stream of sensitive, high‑value data. A talk show isn’t just entertainment; it’s a distribution channel for normalising AI agents as daily companions.
Allbirds’ pivot is the other side of the same coin. When a consumer brand decides that the better story for investors is “AI infrastructure company” rather than “shoe maker with a turnaround plan”, you know labels are starting to matter more than business models. This is exactly how previous bubbles behaved: add “.com” in 1999, “blockchain” in 2017, “metaverse” in 2021 — now it’s “AI infrastructure” in 2026.
Anthropic’s decision to portray a new model as too dangerous for the public, while selectively demoing it to central bankers, underlines a different kind of power play. Safety framing is real and important, but it also functions as reputation management and influence building. If only a small club of insiders and regulators get to see the cutting edge, the rest of the ecosystem — including open‑source projects and smaller startups — is structurally disadvantaged.
The immediate implication: we are concentrating power in a handful of labs and platforms, without a clear, evidence‑based view of whether all this tokenmaxxing is translating into broad, shared productivity gains.
The bigger picture
Put this week’s stories next to the last few years of AI news, and a pattern emerges.
Scale as ideology.
For years, the leading labs have argued that bigger models plus more data plus more compute lead almost automatically to better capabilities. That thesis worked astonishingly well from GPT‑2 to GPT‑4 and the latest Claude models. But we’re hitting diminishing returns: training costs are exploding, energy and water use are becoming political issues, and many real‑world use cases don’t benefit meaningfully from yet another order of magnitude of parameters.
Infrastructure as the new rent-seeking layer.
Whoever controls AI infrastructure — from proprietary models to data centres, chips, and even AI‑optimised power — can extract recurring rents from everyone else. That’s why every company suddenly wants to be an “AI infrastructure” company. It’s not that Allbirds will run hyperscale data centres; it’s that the story of being in the picks‑and‑shovels business is now more valuable than selling picks or shovels.
Data and distribution as the real moats.
The OpenAI acquisitions TechCrunch mentions are part of a broader scramble to lock down proprietary data and distribution:
- Financial data for autonomous agents that manage money;
- Media formats for synthetic personalities and AI‑driven content;
- Productivity platforms for embedded copilots.
This mirrors historic tech shifts: Microsoft tying Office to Windows, Google owning both search and ads, Meta integrating social graphs across apps. AI just raises the stakes because the same models can mediate everything from work to healthcare to politics.
Safety as marketing and regulatory arbitrage.
When Anthropic or any other lab describes a system as too powerful for the public, they are often doing three things at once: honestly flagging risks, differentiating their product (“ours is more advanced than what you can try”), and shaping the regulatory narrative in their favour (“we are the responsible adults, let us set the rules”). The selective demo to Jerome Powell, as reported by TechCrunch, is part of this influence choreography.
The net effect is a familiar Silicon Valley dynamic: massive hype and infrastructure build‑out ahead of clear business fundamentals. Nvidia and the major cloud providers are the obvious early winners. Whether everyone else will see a net benefit depends on whether the industry can evolve from tokenmaxxing to outcome‑maxxing — optimising for measurable value rather than raw usage.
The European angle
For European users, companies, and policymakers, this isn’t a distant American spectacle. It directly collides with core European priorities: regulation, sovereignty, and sustainability.
First, the EU AI Act, alongside the Digital Services Act (DSA) and Digital Markets Act (DMA), is explicitly designed to avoid exactly the kind of opaque concentration of power we’re now seeing in U.S. AI labs. If critical models are demoed only to select regulators and central bankers in Washington, while remaining black boxes for everyone else, Europe risks importing systems it cannot meaningfully scrutinise or contest.
Second, the energy and infrastructure footprint of tokenmaxxing lands on European soil as well. Data‑centre clusters in Ireland, the Nordics, the Netherlands, and Central Europe are already pressuring power grids and water resources. An arms race for ever larger models trained in U.S. clouds still requires European land, permits, and public acceptance for the inference infrastructure.
Third, Europe has an opportunity — and a dilemma — around its own AI stack. On the one hand, European players like Mistral, Aleph Alpha, Stability AI’s European operations, and various national initiatives show that the continent is not condemned to be purely a customer of U.S. models. On the other hand, if the market narrative becomes “only the largest U.S. labs have truly state‑of‑the‑art models,” many European enterprises will default to buying into that story, regardless of whether they actually need that level of capability.
For European CIOs, the key question is no longer “Which model is the absolute best on benchmarks?” but “Which model and provider align with our regulatory obligations, data‑protection culture, and long‑term bargaining power?” In a world of tokenmaxxing, discipline — in procurement, governance, and energy use — might be Europe’s biggest competitive advantage.
Looking ahead
Where does this all go over the next 12–24 months?
A correction in AI-washing.
Not every company that rebrands as an AI infrastructure play will survive first contact with cash flow reality. Expect a wave of quiet down‑rounds, restructurings, and failures among firms whose AI story was mostly narrative arbitrage. Public markets, in particular, will become less forgiving once interest rates and energy prices are fully priced into AI economics.
Fewer, larger model providers — plus a long tail.
The capital requirements of frontier models almost guarantee consolidation at the top: a handful of global labs (OpenAI, Anthropic, Google DeepMind, perhaps one or two more) plus a strong open‑source ecosystem optimised for specific domains and languages. Europe has a shot at leading in the latter, especially for regulated sectors where transparency and localisation matter more than absolute cutting‑edge performance.
Regulators will demand more than private demos.
The spectacle of central bankers getting bespoke access to unreleased models while the public is told they are too dangerous will not age well. Expect tougher demands for independent evaluations, mandatory documentation of training data and energy use, and clearer liability rules under the EU AI Act and similar frameworks elsewhere.
Shift from token counts to ROI.
Boards and CFOs will start asking boring but decisive questions: How many hours were saved? How much revenue was generated? What risks were introduced? Tokenmaxxing will be tolerated only where it demonstrably yields productivity gains or new capabilities, not just nicer demos.
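To make that shift concrete, here is a minimal back‑of‑the‑envelope sketch in Python of the kind of calculation those CFOs will run. Every number in it (token volume, price per million tokens, hours saved, hourly rate, attributable revenue) is a hypothetical placeholder for illustration, not a figure from the Equity episode or any real deployment:

```python
# Toy sketch: does an AI rollout pay for itself, or is it just tokenmaxxing?
# All figures below are hypothetical placeholders, chosen only to illustrate
# the arithmetic of token spend versus measurable business value.

def monthly_token_cost(tokens_per_month: float, price_per_million: float) -> float:
    """Inference spend: tokens consumed times the price per million tokens."""
    return tokens_per_month / 1_000_000 * price_per_million

def monthly_value(hours_saved: float, hourly_rate: float, extra_revenue: float) -> float:
    """Crude value estimate: labour hours saved plus directly attributable revenue."""
    return hours_saved * hourly_rate + extra_revenue

cost = monthly_token_cost(tokens_per_month=2_000_000_000, price_per_million=5.0)  # €10,000
value = monthly_value(hours_saved=300, hourly_rate=60.0, extra_revenue=4_000.0)   # €22,000

roi = (value - cost) / cost
print(f"Monthly cost: €{cost:,.0f}, value: €{value:,.0f}, ROI: {roi:.0%}")
# Monthly cost: €10,000, value: €22,000, ROI: 120%
```

The point is not the specific numbers but the framing: token volume appears only on the cost side of the ledger. A tokenmaxxing culture optimises the first function; an outcome‑maxxing one optimises the ratio between the two.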
The unresolved question is whether the current frenzy will leave behind robust, shared infrastructure — like cloud computing did after the dot‑com bust — or whether it will crowd out more diverse, decentralised innovation paths in favour of a few giant labs.
The bottom line
The stories TechCrunch surfaces this week are not just curiosities about shoes, talk shows, and secretive models. They’re early warning signs of an AI economy where scale, narrative, and regulatory access risk becoming substitutes for real innovation. Tokenmaxxing can be a useful phase in exploring the frontier — but if we mistake it for a strategy, we’ll end up with oversized data centres and undersized benefits.
The urgent question for enterprises, regulators, and users — especially in Europe — is simple: Are we building AI infrastructure that serves society, or just society‑sized infrastructure that serves a few labs?