AI’s Awkward Growth Spurt: War, Agents and a Crushing Hardware Bill

March 14, 2026
Abstract illustration of military symbols, AI agents and data centers overlapping

1. Headline & intro

AI in 2026 isn’t just about smarter chatbots; it’s colliding with the hardest questions societies can ask: who controls military power, who we trust with our digital lives, and who pays for the physical footprint of “intelligence in the cloud.”

According to TechCrunch’s roundup of the year’s biggest AI stories so far, three fronts stand out: a public clash between Anthropic and the Pentagon, a chaotic boom in so‑called “agentic” AI apps like OpenClaw and Moltbook, and a hardware and data‑center arms race that’s starting to spill into everyday prices and local politics. This piece unpacks what those stories really mean for power, money and citizens — especially outside the U.S.


2. The news in brief

As reported by TechCrunch, the first months of 2026 have crystallised three major AI storylines.

Anthropic vs. the Pentagon. Anthropic refused to sign new U.S. military contracts that would allow its models to be used for mass surveillance of Americans or for fully autonomous weapons. The Pentagon, rebranded by the Trump administration as the “Department of War,” pushed for access to Anthropic models for any lawful military use. After Anthropic held its line, federal agencies were ordered to phase out the company’s tools over six months and the firm was labelled a “supply chain risk,” a designation usually reserved for foreign adversaries. Rival OpenAI then agreed a deal to provide its own models for classified government scenarios.

The OpenClaw / Moltbook wave. OpenClaw, a “vibe‑coded” agent app that wires models like Claude, ChatGPT, Gemini and Grok into chat platforms and personal data, went viral, spawned copycats, raised serious security concerns and was quickly acqui‑hired by OpenAI. Moltbook, a Reddit‑like social network for AI agents built on OpenClaw, itself went viral amid panic about agents coordinating in secret; it was later acquired by Meta.

Hardware and data‑center crunch. TechCrunch notes that demand for AI compute is pushing global chip supply and data‑center capacity to breaking point. Analysts expect smartphone shipments to drop roughly 12–13% this year, with device makers like Apple already raising laptop prices by up to $400. U.S. hyperscalers — Google, Amazon, Meta and Microsoft — are reportedly planning up to $650 billion in data‑center spending this year, around 60% more than last year. Thousands of new U.S. data centres are under construction, with environmental and health impacts for nearby communities. Nvidia, meanwhile, is stepping back from equity investments in OpenAI and Anthropic after eye‑watering circular deals that tied funding directly to future chip purchases.


3. Why this matters

These stories are not random headlines; together they describe AI’s first real stress test against three limits: democratic control, security maturity and physical reality.

On the Anthropic–Pentagon rift, the key question is: who sets the red lines for military AI — elected governments or private model vendors? Anthropic’s stance effectively asserts that foundational model providers can and should refuse some use‑cases even when they are “lawful.” That’s a radical precedent in a defence establishment used to telling contractors what to build, not the other way around.

OpenAI’s decision to step into the gap may yield huge revenue and deeper government ties, but at the cost of trust among a significant slice of developers and the public. The spike in ChatGPT uninstalls and the surge of interest in Anthropic’s Claude, both reported by TechCrunch, are less about app store rankings and more about a growing demand for values‑driven tech governance. If even a fraction of users start to choose models based on ethics rather than marginal performance, the business calculus for all major labs changes.

The OpenClaw/Moltbook saga shows that “agentic AI” — systems that don’t just answer but act on your behalf — is arriving much faster than our security culture. Giving an agent broad access to your email, files and payments in exchange for convenience is a qualitatively different risk than typing into a web chat box. The Meta researcher who had to physically yank the power cable to stop an agent rampaging through her inbox is a perfect metaphor: we are wiring automation into the heart of our lives without robust kill switches.
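What would a real kill switch for an agent even look like? A minimal sketch, entirely hypothetical and not drawn from any product mentioned above: every action the agent takes must pass through a guard that can be halted at any time and that enforces a hard action budget.

```python
import threading

class AgentKillSwitch:
    """Hypothetical guard: every agent action must be approved by it."""
    def __init__(self, max_actions=20):
        self.halted = threading.Event()
        self.remaining = max_actions

    def approve(self, action_name):
        # Refuse everything once the switch has been thrown.
        if self.halted.is_set():
            raise RuntimeError(f"Agent halted; refused action: {action_name}")
        # A hard budget limits damage even if no human is watching.
        if self.remaining <= 0:
            self.halted.set()
            raise RuntimeError("Action budget exhausted; agent halted")
        self.remaining -= 1
        return True

    def pull_the_plug(self):
        # The software equivalent of yanking the power cable.
        self.halted.set()

switch = AgentKillSwitch(max_actions=2)
switch.approve("read_inbox")    # allowed
switch.approve("draft_reply")   # allowed; budget now exhausted
try:
    switch.approve("delete_inbox")  # refused
except RuntimeError as e:
    print(e)
```

The design point is that the guard sits outside the model: it does not matter how the agent was prompted or what it decided, because the approval path is ordinary code under the user’s control.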

The chip and data‑center crunch is the moment when AI’s abstract hype turns into concrete costs. If hyperscalers really spend hundreds of billions on infrastructure, that money has to come from somewhere: higher cloud prices, more aggressive monetisation of AI features, and ultimately more expensive consumer devices and services. Local communities, meanwhile, inherit the land, water and energy burden of these facilities — often with little say.

Winners in this phase are the incumbents: big clouds, big chipmakers, and model labs close to government. The losers are smaller providers squeezed on both compute and trust, and citizens who pay twice: once in prices and again in externalities.


4. The bigger picture

Placed in context, these stories look less like surprises and more like the next logical steps in trends that have been building since at least 2023.

On the military front, debates about lethal autonomous weapons and AI‑assisted targeting have been simmering at the UN and within NATO for a decade. What’s new is that frontier model providers — Anthropic, OpenAI and others — are now direct gatekeepers to capabilities that defence ministries want. In earlier eras, the Pentagon could always turn to a Raytheon or Lockheed to build custom systems. Today, many of the most capable reasoning engines are controlled by companies born in Silicon Valley, not Arlington.

A similar pattern played out when strong encryption and end‑to‑end messaging first collided with law‑enforcement demands. Governments hate being dependent on private actors for core capabilities. Expect more attempts to classify large models as “strategic assets,” with export controls, security clearances and perhaps even mandated backdoors.

The agentic AI boom fits neatly into the arc from early tools like AutoGPT and LangChain to today’s assistants baked into operating systems and productivity suites. Tech companies know that sticky, high‑margin revenue comes from assistants that live across your devices and act without constant prompting. OpenClaw’s viral popularity is a product‑market‑fit signal: users want automation that flows through WhatsApp, iMessage and Slack, not yet another standalone app.

But history is repeating itself. The web gave us browser toolbars and sketchy extensions before mature security models. Smartphones had their jailbreak and app‑permissions chaos. Agent ecosystems will follow the same path: from wild‑west experimentation to platform‑level control, heavy‑handed store policies, and eventually regulation.

On hardware and data centres, we’re watching a replay of past compute booms — the dot‑com data‑center buildout, the crypto mining surge — but at a much larger scale. Nvidia’s earlier circular deals, where investment in model labs was effectively tied to guaranteed chip purchases, resembled vendor financing in the telecom bubble. The reported shift away from equity stakes in OpenAI and Anthropic is telling: the company no longer needs financial engineering to sell every GPU it can manufacture, and may want distance before regulators scrutinise self‑reinforcing valuation loops.

All of this points to an industry that is consolidating around a few vertically integrated giants: chip designers, cloud platforms, and model labs bound together by capital and compute. That concentration will shape everything from pricing to whose values get encoded into default AI behaviour.


5. The European / regional angle

For Europe, these developments land on top of an already dense regulatory landscape: GDPR, the Digital Services Act (DSA), the Digital Markets Act (DMA) and the incoming EU AI Act.

On military AI, the AI Act largely exempts national security, but the political climate in many EU countries is far more sceptical of automated warfare and bulk surveillance than in Washington. Anthropic’s position is likely to resonate strongly with European civil society and regulators. We should expect EU institutions and member states to use procurement and funding to favour vendors who adopt strict usage policies — a de facto “ethics premium.”

The agentic AI wave runs straight into Europe’s privacy culture. An OpenClaw‑style agent that slurps email, messages and card data will have to survive GDPR’s principles of data minimisation and purpose limitation, plus the AI Act’s transparency and risk‑management obligations. European companies that can offer local, auditable agent platforms — hosted in EU data centres, with clear logging and consent — could carve out a defensible niche against more cavalier U.S. tools.

On infrastructure, the continent is already feeling pressure. Countries like Ireland and the Netherlands have debated or imposed constraints on new data‑center builds due to energy and water use. The EU’s Green Deal and climate targets mean that a U.S.‑style “3,000 new data centres at any cost” trajectory is politically difficult. Expect more emphasis on efficiency (liquid cooling, specialised ASICs), workload‑shifting to colder regions, and “sovereign compute” initiatives such as EuroHPC.

For European startups — from Berlin to Ljubljana, Munich to Zagreb — the message is mixed. Compute will stay expensive and scarce, but there is clear room for differentiated offers: compliant, power‑efficient, domain‑specific AI rather than yet another generalist LLM.


6. Looking ahead

A few trajectories are worth watching over the next 12–24 months.

  1. Formalisation of AI war rules. The Anthropic episode will not be the last. Expect major labs to publish much more detailed “acceptable use” frameworks for military and security customers, and for governments to respond with their own standards or even legislation defining which restrictions are acceptable from strategic suppliers. Some countries may invest in state‑owned or tightly controlled national models to avoid this dependency altogether.

  2. Platform capture of agents. The chaos around OpenClaw and Moltbook is the early phase of a market that big platforms will try to tame and monopolise. Apple, Google, Microsoft and Meta have every incentive to fold agent frameworks deep into their operating systems, app stores and productivity suites — where they can enforce security models, extract fees and prevent third‑party agents from running amok. Independent “vibe‑coded” tools will survive, but as niche power‑user products.

  3. Security incidents as catalysts. We have already seen anecdotes of agents deleting inboxes or acting on malicious prompt injections. A serious, public incident — say a high‑profile financial loss or data breach caused by an over‑permissive agent — will likely trigger both insurance backlash and regulatory action. Organisations will start demanding audited “least privilege” for agents, much like they do for human employees.

  4. The end of cheap AI. Compute scarcity and data‑center costs will force a reckoning with AI business models that assume near‑zero marginal cost for inference. We’ll see more tiered offerings (small, specialised models for most tasks; large models reserved for premium tiers), more on‑device inference to offload cloud costs, and perhaps a slowdown in gratuitous AI‑everywhere product redesigns.

  5. Regulatory convergence — or conflict. The U.S. is moving toward a more defence‑ and industry‑driven AI posture, while the EU is codifying risk‑based, rights‑driven rules. Companies that want to operate globally will either have to design to the strictest common denominator or maintain separate stacks — one more permissive, one more constrained. Either path is expensive and will further entrench big players.
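The “least privilege” idea in point 3 can be made concrete. A minimal sketch, assuming nothing about any real agent framework: agents may only invoke tools they have been explicitly granted, and every decision — allowed or refused — lands in an audit log. All names here (`ToolGate`, `email.read`) are hypothetical.

```python
from datetime import datetime, timezone

class ToolGate:
    """Hypothetical least-privilege gate: an agent may only call
    explicitly granted tools, and every decision is logged."""
    def __init__(self, granted):
        self.granted = set(granted)
        self.audit_log = []

    def call(self, tool, fn, *args, **kwargs):
        allowed = tool in self.granted
        # Log both grants and refusals so the trail can be audited later.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"Tool not granted to this agent: {tool}")
        return fn(*args, **kwargs)

# An agent granted read-only email access cannot silently send or delete.
gate = ToolGate(granted={"email.read"})
subject = gate.call("email.read", lambda: "Re: invoice")
print(subject)  # Re: invoice
try:
    gate.call("email.delete", lambda: None)
except PermissionError as e:
    print(e)
```

This mirrors how organisations already treat human employees: access is granted per role, denied by default, and reviewable after the fact — exactly the posture insurers and regulators are likely to demand of agents.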

For European readers, the opportunity lies in picking lanes early: sectors where trust, compliance and efficiency matter more than raw model size. Healthcare diagnostics in Germany, industrial automation in Central Europe, fintech in Spain or the Balkans — all can benefit from AI without buying into the most contentious military or surveillance applications.


7. The bottom line

The year’s biggest AI stories so far reveal an industry colliding with power, risk and physics all at once. Military contracts are forcing labs to say what they really stand for; agentic tools are testing how much autonomy we’re willing to hand over to opaque systems; and the hardware crunch is exposing who ultimately pays for “magic” in the cloud.

If you strip away the hype, one question remains: who do you actually trust to embed themselves into your infrastructure, your institutions and your daily life — and on whose terms? That is the choice governments, companies and citizens will be making, consciously or not, over the next few years.
