Eridu’s $200M bet: rewriting the AI data center’s weakest link

March 10, 2026
5 min read
[Illustration: a futuristic AI data center with dense networking hardware and glowing data links]

1. Headline & intro

Everyone is fighting for GPUs, but the real knife fight in AI is quietly moving to the network. If you’re wiring together tens of thousands of accelerators, the slowest link in the system is no longer the chip – it’s how those chips talk to each other. That’s the bet behind Eridu, a new startup that just walked out of stealth with one of the largest Series A rounds we’ve seen in AI infrastructure. In this piece, we’ll look at what Eridu is actually changing, why networking is becoming the new chokepoint, and what this means for cloud providers, chip vendors and European AI ambitions.

2. The news in brief

According to TechCrunch, Eridu has emerged from stealth with a $200 million oversubscribed Series A round, bringing its total funding to $230 million. The round is led by Socratic Partners, veteran investor John Doerr, Matter Venture Partners and a long list of strategic backers including Hudson River Trading, Capricorn Investment Group, MediaTek, Bosch Ventures, TDK Ventures and an investing arm linked to TSMC.

Eridu was founded in 2024 by CEO Drew Perkins, a long‑time networking pioneer involved in early internet protocols and multiple successful networking and optical startups, together with co‑founder Omar Hassen, whose background is in networking chips at major silicon vendors.

The company is designing new AI‑oriented networking chips and full systems that aim to replace layers of traditional optical networking in AI data centers. The goal is to move more of the networking logic directly onto the silicon, reducing hops, cutting latency and power consumption, and ultimately improving the efficiency of large GPU clusters. Eridu has around 100 employees. Its valuation and exact target customers have not been publicly disclosed.

3. Why this matters

The AI conversation has been dominated by GPU scarcity and model sizes, but Eridu is attacking the layer that quietly defines the ceiling of what you can do with all that compute: the network fabric.

Training frontier models means wiring together thousands or even tens of thousands of accelerators. At that scale, the classic data center pattern—add more switches, more optical links, more tiers—starts to break down. Every additional hop adds latency and jitter. Every optical transceiver adds cost, power and another potential point of failure. You can buy more GPUs, but if the network starves them of data, you’re paying for silicon that sits idle.
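A back-of-envelope model makes the "idle silicon" point concrete. Assuming (illustratively, not from Eridu's data) that gradient exchange overlaps perfectly with compute, a training step can go no faster than the slower of the two, so network time directly caps GPU utilization:

```python
def gpu_utilization(compute_ms: float, comm_ms: float) -> float:
    """Upper bound on GPU utilization per training step, assuming
    compute and gradient exchange overlap perfectly: the step takes
    max(compute, comm) milliseconds, and only compute is useful work."""
    step_ms = max(compute_ms, comm_ms)
    return compute_ms / step_ms

# Illustrative numbers: 80 ms of compute against 120 ms of all-reduce
# traffic leaves a third of the cluster's compute capacity unused.
print(gpu_utilization(80, 120))
```

With these assumed figures the bound is about 0.67; real clusters overlap imperfectly, so actual utilization would be lower still.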

Eridu’s thesis is simple and uncomfortable for incumbents: networking performance is improving far more slowly than compute performance. As TechCrunch reports, the company argues that GPU compute and memory bandwidth are growing by an order of magnitude annually, while mainstream data center switches move much more conservatively. That gap turns the network into the new bottleneck.
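The gap compounds quickly. As a rough sketch, take the order-of-magnitude annual compute growth the article cites and an assumed (hypothetical, for illustration) doubling of switch bandwidth every two years:

```python
def bottleneck_ratio(years: float,
                     compute_growth: float = 10.0,
                     network_growth: float = 2.0 ** 0.5) -> float:
    """How far compute pulls ahead of the network after `years` years.
    Defaults are illustrative: ~10x/year compute (per the article) and
    a doubling of switch bandwidth roughly every two years (an assumption,
    i.e. ~1.41x/year)."""
    return (compute_growth / network_growth) ** years

# Under these assumptions, compute outpaces the network by roughly
# 350x after just three years.
print(round(bottleneck_ratio(3)))
```

The exact rates are debatable, but any sustained mismatch of this shape ends the same way: the fabric, not the GPU, sets the ceiling.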

If Eridu’s approach—pushing more of the switching and connectivity functionality onto custom AI‑centric silicon and reducing the number of external optical stages—works, several groups stand to benefit:

  • Hyperscalers and AI labs could pack more effective compute into the same power and space budget.
  • Model providers would see shorter training times and more predictable scaling behaviour.
  • Chip vendors like Nvidia, AMD and Intel would gain if their GPUs can be deployed in larger, better‑utilised clusters.

The potential losers are traditional network equipment and merchant‑silicon vendors whose roadmaps assume incremental Ethernet and optics evolution, not a rethinking of the entire stack. Even if Eridu never ships a dominating product, the funding size alone sends a signal: investors believe AI networking is a big enough pain point to justify rebuilding from the chip up.

4. The bigger picture

Eridu is not operating in a vacuum; it is part of a broader re‑architecture of the AI data center.

First, we’re witnessing vertical integration around AI workloads. Nvidia already sells not only GPUs, but also InfiniBand switches, NVLink interconnects and complete systems. Cloud providers like Google and Amazon design their own TPUs or accelerators and increasingly, their own custom network hardware. Eridu is effectively saying: you don’t have to be a hyperscaler or Nvidia to get an AI‑optimised network fabric.

Second, there’s a rise of specialised interconnect technologies. Optical startups are working on co‑packaged optics and in‑package photonics; others explore compute‑in‑network concepts or use emerging standards like CXL to blur the lines between memory and network. Eridu’s “more on‑chip, fewer discrete optics” strategy fits the same trend: pulling latency‑sensitive functionality closer to compute and reducing analog complexity.

Historically, networking went through the opposite transition. The industry moved from proprietary fabrics to Ethernet and merchant silicon, which commoditised the market and squeezed margins. AI may reverse that trajectory in the high‑end segment: exotic topologies, custom protocols and application‑aware fabrics could bring back differentiated hardware—at least for the top 1% of clusters.

Eridu is also part of a capital wave into AI infrastructure beneath the model layer. After the explosion of foundation models, investors are now looking at power delivery, cooling, packaging, verification and, yes, networking. A $200 million Series A for a pre‑product infrastructure startup would have been unthinkable in the previous cycle; in the AI era, it signals a belief that there is a new multi‑billion‑dollar category forming: the “AI network stack”.

5. The European / regional angle

For Europe, Eridu’s emergence underscores a tough reality: the continent wants “sovereign AI” but still relies heavily on non‑European hardware stacks.

The EU is investing in EuroHPC supercomputers and national AI clusters, while policymakers in Brussels push the AI Act, the Green Deal and stricter reporting on energy use. All of that points in one direction: do more AI with fewer watts and less imported hardware dependency.

If Eridu can genuinely cut power consumption and space per unit of useful AI compute by simplifying the network, it becomes relevant for European cloud providers, telecoms and research centres—even if the company itself is US‑based. A more efficient fabric could make regional AI clusters in places like France, Germany, the Nordics or Central Europe more competitive versus US hyperscaler regions.

There is also a regulatory nuance. Under the EU AI Act and broader sustainability rules, large AI providers operating in Europe will have to report and, over time, likely reduce their environmental impact. Networking is a non‑trivial slice of data center power budgets; squeezing out inefficiencies at that layer supports both compliance and cost control.

European vendors—from Nokia and Ericsson on the telecom side to local data center operators and integrators—face a strategic choice. Either they integrate and wrap offerings from players like Eridu (or similar startups), or they risk watching the high‑end AI fabric market consolidate around US chip‑centric ecosystems. So far, most European activity in AI hardware has focused on accelerators and edge devices; Eridu is a reminder that owning the plumbing can be just as strategic as owning the chips.

6. Looking ahead

Designing custom silicon for networking is a long game. Even with deep experience and ample funding, Eridu still has to navigate design, tape‑out, validation and integration into real‑world clusters. That usually means years, not quarters.

In the near term, expect Eridu to focus on three things:

  1. Securing lighthouse customers among hyperscalers, AI labs or high‑end trading and research shops. One or two big design wins can validate the architecture.
  2. Building a software story. Fancy hardware is useless if it’s hard to program or manage. Whether Eridu positions its fabric as “Ethernet‑compatible enough” or pushes a more radical design will determine adoption risk.
  3. Showing credible efficiency gains. Operators will want clear, auditable improvements in performance per watt and per euro spent, not just benchmarks in artificial scenarios.

There are also macro risks. AI capex has been on an extraordinary upswing; if that slows or reverses, experimental network architectures could be deprioritised relative to safer, incremental upgrades. Nvidia’s increasingly integrated stack is another headwind: when one vendor controls the GPU, software and interconnect, introducing a new fabric layer requires political as well as technical capital.

For European readers, the key question is whether local players—cloud providers, telcos, integrators—get involved early enough to influence how such fabrics support regional compliance, privacy and energy‑mix constraints. Those who wait for the market to “settle” may find that the winning AI fabrics have already been optimised for someone else’s regulatory and commercial realities.

7. The bottom line

Eridu’s $200 million Series A is a loud signal that the AI bottleneck has shifted from chips to the network. Whether the startup ultimately wins or not, it will pressure incumbents to rethink how they build fabrics for massive GPU clusters, with power and latency as first‑class design goals. For Europe in particular, the lesson is clear: sovereignty in AI is not just about owning models or data, but also the invisible infrastructure that moves bits between GPUs. The open question is who will actually step up to shape that layer.
