Startup founders love to talk about runway, headcount and growth. Far fewer can answer a basic question: what does one customer actually cost to serve in the cloud once the credits are gone?
The conversation on TechCrunch’s Equity podcast with Darren Mowry, Google Cloud’s VP for global startups, is a polite reminder that the “check engine” light is blinking for a lot of AI-first companies. This isn’t just about Google’s strategy; it’s a signal that the easy phase of the AI boom is over. In this piece, we’ll unpack what this means for founders, why cloud incentives are misaligned with startup reality, and how European teams in particular should respond.
The news in brief
According to TechCrunch’s Equity podcast, host Rebecca Bellan recently spoke with Darren Mowry, vice president for global startups at Google Cloud, in a roughly 30‑minute video episode published in mid‑February 2026.
The discussion centres on how early‑stage startups are being pushed to move faster than ever, especially around AI, while facing tighter funding conditions, higher infrastructure costs and growing pressure from investors to show real traction much earlier. TechCrunch notes that cloud credits, access to GPUs and ready‑made foundation models have lowered the barrier to entry, but that early infrastructure decisions can become painful once those free credits are exhausted and “real” cloud bills land.
Mowry outlines what Google Cloud is seeing across the startup ecosystem, how the company is competing to attract AI startups to its platform, and what he believes founders should consider as they scale from prototype to production.
Why this matters
This conversation matters because it exposes the central contradiction of the 2020s startup playbook: founders are told to move fast with AI, but their economic survival depends on moving efficiently with infrastructure. Those two goals are not automatically aligned.
The obvious beneficiaries are the hyperscalers. Google Cloud, like AWS and Azure, uses credits, GPU access and generous startup programmes as customer‑acquisition tools. Once a team is deeply integrated with proprietary services—managed databases, ML platforms, vector stores, observability stacks—the gravitational pull of that cloud becomes very hard to escape. In that sense, credits are not philanthropy; they’re a structured discount on future lock‑in.
The losers are founders who confuse temporary subsidy with sustainable unit economics. Many AI startups proudly show revenue growth while quietly absorbing gross margins that would horrify a late‑stage investor. When the “check engine” light finally comes on—usually around Series A or after a tough board meeting—it’s not unusual to discover that each new customer actually destroys value once cloud spend is allocated correctly.
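The margin problem is easy to see in a back-of-the-envelope calculation. The sketch below allocates monthly cloud spend to individual customers and checks whether each account is margin-positive; all unit costs, usage figures and field names are made up for illustration, not drawn from any provider's actual pricing.

```python
# Illustrative sketch: allocate monthly cloud spend per customer and
# check whether each account is margin-positive. All numbers are invented.

from dataclasses import dataclass

@dataclass
class CustomerUsage:
    name: str
    monthly_revenue: float   # what the customer pays per month, EUR
    llm_tokens_m: float      # millions of model-API tokens consumed
    gpu_hours: float         # inference GPU hours attributable to them
    storage_gb: float        # data stored for this customer

# Assumed unit costs (EUR) -- replace with figures from your own billing export.
COST_PER_M_TOKENS = 8.0
COST_PER_GPU_HOUR = 2.5
COST_PER_GB_MONTH = 0.02

def cost_to_serve(c: CustomerUsage) -> float:
    """Allocated cloud cost for one customer in one month."""
    return (c.llm_tokens_m * COST_PER_M_TOKENS
            + c.gpu_hours * COST_PER_GPU_HOUR
            + c.storage_gb * COST_PER_GB_MONTH)

def contribution_margin(c: CustomerUsage) -> float:
    return c.monthly_revenue - cost_to_serve(c)

customers = [
    CustomerUsage("light user", 99.0, 2.0, 10.0, 50.0),
    CustomerUsage("power user", 99.0, 20.0, 60.0, 500.0),
]

for c in customers:
    margin = contribution_margin(c)
    flag = "OK" if margin > 0 else "LOSS-MAKING"
    print(f"{c.name}: cost {cost_to_serve(c):.2f}, margin {margin:.2f} [{flag}]")
```

With these invented numbers, the "power user" on the same flat price destroys value every month — exactly the pattern that stays invisible while credits are footing the bill.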
The immediate implication: infrastructure architecture is no longer a purely technical concern. It’s becoming board‑level strategy. CTOs, CFOs and even founders with little infra background now need a working vocabulary of FinOps, data locality, model‑hosting options and contract negotiation. Mowry’s appearance on a mainstream startup podcast is a tell: cloud strategy has become mainstream founder homework, not a niche DevOps topic.
The bigger picture
The themes in the Equity episode plug directly into three wider trends.
1. The AI infra hangover. The last two years have been defined by AI exuberance: launch fast, glue an LLM onto everything, worry about costs later. We’re now seeing the hangover. Many generative‑AI startups quietly spend 60–80% of revenue on cloud, especially if they lean on proprietary models via API. Several high‑profile AI companies have publicly admitted that infrastructure costs are their single biggest constraint on profitability and product pricing.
2. The FinOps and repatriation wave. As growth‑at‑all‑costs fades, “efficient growth” is the new religion. FinOps teams, usage‑based pricing, architectural redesigns and even partial cloud repatriation (moving some workloads off the hyperscalers) are suddenly respectable again. While full repatriation is rare for early startups, the idea of “minimum necessary cloud” is spreading: use managed services for what truly differentiates you, keep commodity workloads cheap and portable.
3. Platform competition in the AI stack. Google Cloud’s push to court startups with credits, GPUs and foundation models mirrors what AWS and Azure are doing. The real battle is not over basic compute but over the AI development experience: managed vector databases, training pipelines, evaluation tools, guardrails, specialised hardware (like TPUs) and fine‑tuning platforms. Whoever becomes the default home for AI builders will enjoy years of high‑margin consumption.
Historically, we’ve seen a version of this before. In the early 2010s, AWS credits helped an entire generation of SaaS companies get off the ground, only to be followed by painful cost‑optimisation projects later. The difference now is that AI workloads are more unpredictable, more data‑intensive and more tightly coupled to regulation—making early choices even harder to unwind.
The European / regional angle
For European founders, the check‑engine metaphor is even more relevant.
First, European rounds are typically smaller than their U.S. equivalents at the same stage. That means there is less tolerance for wasteful infrastructure decisions. A few hundred thousand euros in unoptimised GPU or API spend can easily translate into several months of lost runway for a Berlin or Ljubljana startup.
Second, regulation is not optional. GDPR already shapes how and where data can be processed. The upcoming EU AI Act will further restrict how high‑risk AI systems are trained and deployed, and it will demand traceability, documentation and, in some cases, localisation of models and data. That complicates cloud‑selection decisions: it’s not enough to choose “the cheapest GPU”. Teams must think about data residency, model governance and vendor auditability from day one.
Third, Europe actually has alternatives. Providers such as OVHcloud, Scaleway, Hetzner, Cleura and Deutsche Telekom’s Open Telekom Cloud position themselves as more transparent on pricing and stronger on data sovereignty. They may not yet match the depth of Google Cloud’s AI stack, but for many workloads—especially outside the core ML training loop—they can reduce burn and regulatory headaches.
Finally, cultural expectations around privacy and vendor dominance are different. In Germany or France, selling an AI product built entirely on a U.S. hyperscaler can trigger more buyer questions than in Silicon Valley. That reality should feed back into architecture diagrams: multi‑cloud, sovereign‑cloud options and explicit data‑processing maps are not enterprise luxuries; they’re sales enablers.
Looking ahead
Expect cloud strategy to move from the appendix of the pitch deck to one of its main slides.
Over the next 12–24 months, three shifts are likely:
1. Early FinOps by default. Instead of waiting until Series B, many seed and Series A companies will start tracking cloud cost per feature and per customer from the start. Simple dashboards showing “AI cost of goods sold” will become as common as MRR charts.
2. More transparent startup offers from clouds. As stories of painful lock‑in spread, hyperscalers will be pushed to make their startup deals clearer: how credits convert into real spend, what minimum commitments lurk behind discounts, and how easy it is to move data out. Expect more talk of “portable architectures” in marketing, even if reality lags.
3. Rise of the infra‑savvy founder. The next generation of successful AI founders won’t just understand models and product; they’ll be conversant in GPU utilisation, data‑pipeline design and contract terms. They’ll know exactly which parts of the stack must stay flexible and which can safely lean on a single cloud.
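The first of these shifts — an “AI cost of goods sold” view — needs nothing fancier than a rollup over tagged usage records. The sketch below assumes you can export records tagged with feature, customer and cost from your billing data; the field names and all figures are illustrative.

```python
# Minimal "AI cost of goods sold" rollup over tagged usage records.
# Record shape and all numbers are illustrative assumptions, not the
# export format of any specific cloud provider.

from collections import defaultdict

usage_records = [
    {"feature": "chat",   "customer": "acme",   "cost_eur": 120.0},
    {"feature": "chat",   "customer": "globex", "cost_eur": 340.0},
    {"feature": "search", "customer": "acme",   "cost_eur": 45.0},
    {"feature": "search", "customer": "globex", "cost_eur": 15.0},
]

def rollup(records, key):
    """Sum cost_eur grouped by the given record field."""
    totals = defaultdict(float)
    for r in records:
        totals[r[key]] += r["cost_eur"]
    return dict(totals)

cost_per_feature = rollup(usage_records, "feature")
cost_per_customer = rollup(usage_records, "customer")

monthly_revenue = 2000.0  # MRR, illustrative
ai_cogs_pct = 100 * sum(r["cost_eur"] for r in usage_records) / monthly_revenue

print(cost_per_feature)    # {'chat': 460.0, 'search': 60.0}
print(cost_per_customer)   # {'acme': 165.0, 'globex': 355.0}
print(f"AI COGS: {ai_cogs_pct:.0f}% of revenue")
```

Put the resulting percentage next to the MRR chart and the “check engine” light becomes a gauge rather than a surprise.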
Key risks remain. Teams may over‑optimise for cost and under‑invest in reliability or security. Others will swing the opposite way, building baroque multi‑cloud setups far beyond their actual needs. And regulators may still decide that hyperscaler concentration is itself a competition problem, introducing new uncertainty.
But the biggest risk is inertia: ignoring the check‑engine light because “we’ll fix it after the next round.” In today’s market, that round may come later than expected—or not at all.
The bottom line
Google Cloud’s courting of startups, highlighted in TechCrunch’s Equity episode, is not a sign of newfound generosity; it’s a sign that the cloud wars have moved decisively into the AI era. Founders who treat credits as free money rather than a temporary subsidy are setting themselves up for a brutal reckoning when the bills arrive. The real question isn’t which cloud to pick, but how deliberately you design your dependency on any of them. When was the last time you ran a real “check engine” on your own infrastructure economics?