1. Headline & intro
Tokenmaxxing. AI infrastructure. Models "too powerful" for the public. OpenAI buying everything from finance apps to media IP. If you feel like AI insiders live on a different planet, you are not imagining it. A new compute aristocracy is emerging: a handful of labs, chipmakers and hyperscalers setting the pace while everyone else scrambles to understand the vocabulary.
In this piece, we unpack the themes highlighted on TechCrunch’s Equity podcast – OpenAI’s shopping spree, Anthropic’s caution, Meta’s tokenmaxxing culture and the widening AI Anxiety Gap – and ask what they really mean for power, regulation and opportunity in the next phase of the AI race.
2. The news in brief
According to TechCrunch’s Equity podcast, the past week crystallised several fault lines in the AI industry.
OpenAI continued its acquisition streak, snapping up an AI personal finance startup (Hiro) and reportedly exploring media and talk‑show style content deals, signalling ambitions well beyond being just an API provider. In parallel, lifestyle brand Allbirds sold its shoe business and rebranded as an AI infrastructure company, trying to ride the valuation wave around data centres and model hosting.
Anthropic, meanwhile, unveiled a new frontier‑level model that it described as too risky to release broadly, even as it privately demoed the system to US Federal Reserve Chair Jerome Powell. On the infrastructure side, chipmakers AMD, Arm and Qualcomm invested around $60 million into UK self‑driving startup Wayve, while Uber floated a milestone‑based $300 million offer tied to autonomous driving.
TechCrunch also highlighted data‑centre startup Fluidstack and a reported $50 billion agreement with Anthropic, the rise of Anthropic’s developer tool Claude Code at the HumanX conference, and Meta’s internal “tokenmaxxing” leaderboard, alongside a Stanford report on the growing disconnect between AI insiders and everyone else.
3. Why this matters
What ties these headlines together is not just AI hype – it's the consolidation of power.
OpenAI’s acquisition push shows a company racing to own distribution and workflows, not only the core model. Buying a personal finance app is about more than a neat feature: it’s about securing proprietary user data, habit‑forming interfaces and a defensible use case that can’t be easily replicated by open‑source alternatives. Every such deal makes it harder for smaller players to compete, even if they have strong models.
Anthropic’s “too powerful to release” positioning reveals another axis of power: access control. If only regulators, central bankers and a few corporations get hands‑on time with the most capable systems, we are drifting toward a gated AI regime where capability is stratified by capital and political influence. That might be safer in some narrow sense, but it also entrenches incumbents.
Tokenmaxxing, as reported from Meta’s internal culture, is a different kind of red flag. When employees are ranked by how many tokens their models process, you get a perfect storm of perverse incentives: more prompts, more synthetic content, more cost – not necessarily more value. We’ve seen this movie before with monthly active users, click‑through rates and watch time. Whenever a single metric becomes a religion, quality and trust pay the price.
Winners in this phase are clear: GPU vendors, data‑centre operators, and the frontier labs securing massive, multi‑year compute deals like Anthropic’s reported $50 billion arrangement with Fluidstack. The losers are cloud‑dependent startups, open‑source projects without distribution, and enterprises that lock themselves into a single vendor too early.
The AI Anxiety Gap – insiders euphoric, the public uneasy – is not just a psychological curiosity. It is a political risk. If people feel AI is imposed on them by secretive elites chasing token counts and stock options, the backlash will not be subtle.
4. The bigger picture
These stories sit inside a broader realignment of the tech stack.
First, infrastructure is where the real money is flowing. The Wayve investment by AMD, Arm and Qualcomm is less about robotaxis and more about staying relevant in a world dominated by Nvidia. Tying future hardware roadmaps to AI‑heavy applications like autonomous driving is a hedge: if general‑purpose AI demand slows, verticals like AV could keep the fabs busy.
Second, we’ve seen the “too powerful to release” narrative before. Nuclear research and cryptography – including the encryption export controls of the 1990s – went through the same cycle: initial openness, panic over dual‑use risks, followed by stratified access. AI is heading the same way, but with private companies, not states, holding most of the capability.
Third, the OpenAI vs Anthropic rivalry is gradually shifting from whose model scores higher on benchmarks to who owns the developer desktop and enterprise workflow. Claude Code’s strong reception at HumanX underlines that IDE integration, debugging helpers and code‑review agents are where daily habits form. Once a company standardises on a copilot‑style tool, switching providers becomes expensive.
Compare this to earlier platform shifts. In mobile, the battle was not just iOS vs Android, but who controlled the app store, payments and default apps. In cloud, raw compute gave way to managed services and proprietary APIs. AI is converging on the same pattern: models are the new compute, but moats form around data, distribution and integrations.
Lastly, tokenmaxxing is part of a cultural trend: AI as performance theatre. Leaders tout “AI‑powered everything”, internal dashboards celebrate skyrocketing prompt volumes, and boards ask for LLM strategies before they ask whether existing processes are even worth automating. The risk is a lost decade of superficial automation that optimises slide decks rather than productivity.
5. The European angle
For Europe, this moment is both a warning and an opening.
On the one hand, OpenAI’s acquisitions and Anthropic’s mega‑contracts underscore how far behind most European players are in terms of capital and scale. Even the continent’s strongest AI labs – think Mistral, Aleph Alpha or DeepL – cannot casually sign $50‑billion compute deals. If access to state‑of‑the‑art models depends on such arrangements, European digital sovereignty becomes a polite fiction.
On the other hand, the EU has something Silicon Valley does not: regulatory leverage. The EU AI Act – now phasing into application – combined with GDPR, the Digital Services Act (DSA) and the Digital Markets Act (DMA), gives Brussels real tools to shape how tokenmaxxing‑style monitoring and closed‑door “too powerful” systems are deployed.
For example, an internal Meta leaderboard that effectively tracks worker behaviour via AI usage metrics will collide with European labour law and co‑determination traditions. Works councils in Germany or unions in France are unlikely to accept opaque metrics that drive performance reviews without clear accountability.
Wayve’s funding also matters regionally. Although the UK is outside the EU, its AV progress, backed by US chipmakers, could set expectations for continental regulators and carmakers. German OEMs and suppliers now face a future where core autonomy stacks may be controlled by non‑European entities running on US‑designed chips.
European enterprises, from banks in Frankfurt to manufacturers in Slovenia or Croatia, will increasingly have to choose: rely on US frontier labs under EU rules, or nurture regional alternatives that may be slightly less capable but more aligned with European values on privacy, labour and competition.
6. Looking ahead
Over the next 12–24 months, expect three things.
First, more vertical M&A by frontier labs. OpenAI moving into finance and media is only the start; healthcare, legal services and education are obvious next targets. Each acquisition bundles proprietary data, specialist workflows and distribution under a single model provider. Regulators in both the US and EU will eventually have to treat these as data‑centric mergers, not just classic tech deals.
Second, access stratification will deepen. Governments, hyperscalers and a few large corporates will get priority access to the most capable models, whether from OpenAI, Anthropic or others. SMEs and consumers will see slower, more constrained versions. That risks hard‑coding an AI productivity gap into the economy.
Third, the AI Anxiety Gap will become a mainstream political issue. As election cycles approach in the US and Europe, we will see more scrutiny of workplace surveillance via AI tools, job displacement in white‑collar professions, and concentration of AI capabilities in a few US‑based companies. Tokenmaxxing‑style internal metrics may become Exhibit A in debates about algorithmic management.
For practitioners, the smart move now is hedging: avoid hard‑locking yourself into a single proprietary ecosystem, prioritise data portability, and treat vendor “copilots” as interchangeable front‑ends rather than irreplaceable brains. Watch for practical enforcement of the EU AI Act, antitrust investigations into AI‑driven acquisitions, and any serious attempts to create shared public compute infrastructure.
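The “interchangeable front‑ends” advice can be made concrete with a thin abstraction layer in application code. The sketch below is a minimal, hypothetical example – the vendor adapter classes are placeholders, not real SDK calls – of keeping model providers swappable behind one common interface:

```python
from dataclasses import dataclass
from typing import Protocol


class ChatProvider(Protocol):
    """Minimal interface every provider adapter must satisfy."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class VendorAAdapter:
    """Hypothetical adapter; a real one would wrap that vendor's SDK."""
    api_key: str

    def complete(self, prompt: str) -> str:
        # Placeholder: a real adapter would call the vendor's API here.
        return f"[vendor-a] {prompt}"


@dataclass
class VendorBAdapter:
    """A second hypothetical adapter with the same interface."""
    api_key: str

    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"


class Assistant:
    """Application code depends only on ChatProvider, so switching
    vendors is a configuration change, not a rewrite."""
    def __init__(self, provider: ChatProvider) -> None:
        self.provider = provider

    def ask(self, question: str) -> str:
        return self.provider.complete(question)


# Swapping providers requires no change to Assistant itself.
assistant = Assistant(VendorAAdapter(api_key="demo"))
print(assistant.ask("summarise this contract"))
assistant = Assistant(VendorBAdapter(api_key="demo"))
print(assistant.ask("summarise this contract"))
```

The point of the sketch is structural: the moment prompts, tool definitions and conversation state live behind an interface you own, a vendor’s copilot becomes one implementation among several rather than an irreplaceable brain.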
The biggest open questions: Who will own the higher‑value layers above the model – domain‑specific agents, industry data networks, compliance tooling – and can any of those be genuinely open and European‑led?
7. The bottom line
The week’s AI news is less about quirky jargon like tokenmaxxing and more about a structural shift toward an AI economy run by a small compute elite. OpenAI’s acquisitions, Anthropic’s mega‑deals and Meta’s internal metrics all point in the same direction: centralised power wrapped in glossy productivity narratives. The urgent question for policymakers, enterprises and citizens alike is simple: do we accept this trajectory, or demand a more distributed, accountable AI infrastructure before the new aristocracy hardens in place?



