Meta is bleeding AI researchers to a little-known rival that now sits on the same infrastructure tier as Big Tech — and that should make everyone in the industry sit up. Thinking Machines Lab (TML) has quietly assembled a who’s‑who of deep learning talent while locking in access to Nvidia’s newest chips through Google Cloud. This isn’t just another hiring story. It’s a snapshot of a new AI order where the real competitive edge is a volatile mix of compute, equity, and culture. In this piece, we’ll unpack what Meta’s loss really means, why TML’s rise matters, and what this power shift signals for the next phase of the AI race.
The news in brief
According to TechCrunch, Meta veteran Weiyao Wang has left the company after eight years to join Thinking Machines Lab, an AI startup that just signed a multibillion‑dollar cloud agreement with Google. That deal gives TML early access to Nvidia’s latest GB300 chips and places it in a top infrastructure tier alongside players like Anthropic and Meta itself.
The move follows earlier collaboration between TML and Nvidia, and coincides with an ongoing tug‑of‑war for talent between Meta and TML. Business Insider reporting cited by TechCrunch indicates Meta has recruited at least seven of TML’s founding members. At the same time, LinkedIn profiles reviewed by TechCrunch suggest TML is aggressively hiring from Meta.
High‑profile hires at TML include CTO Soumith Chintala, co‑creator of the PyTorch framework, and other former Meta researchers involved in foundational work on segmentation models and multimodal systems. The startup’s headcount is now around 140 employees. Despite having only one publicly released product, TML is reportedly valued at about $12 billion, with investors clearly betting on its team and early‑access infrastructure rather than its current revenue.
Why this matters
The immediate takeaway is obvious: Meta is losing senior AI talent to a much smaller rival. But the deeper story is how the hierarchy of power in AI is shifting from "Big Tech vs. everyone" to a more nuanced ecosystem where well‑funded, well‑connected labs can realistically compete on the frontier.
TML benefits on several fronts. First, the Google–Nvidia cloud deal neutralises one of the classic startup disadvantages: lack of access to cutting‑edge compute. If you can tap the same GB300‑class hardware and tooling stack as the tech giants, the bottleneck becomes people and ideas, not GPUs. Second, by hiring people like Chintala and other long‑time Meta researchers, TML isn’t just accumulating résumés; it’s importing institutional knowledge about how to run large‑scale training, deploy models, and maintain research velocity.
Meta, on the other hand, is discovering the limits of compensation as a retention tool. As TechCrunch notes, even seven‑figure packages struggle against something more powerful for ambitious researchers: equity in a high‑growth, still‑undervalued lab. With TML's valuation an order of magnitude below some of the most hyped frontier players, the upside for early employees remains substantial, especially if the company manages a breakout model or a lucrative partnership.
The losers, at least in the short term, are second‑tier AI startups and traditional enterprises trying to build in‑house AI teams. When a 140‑person lab can credibly promise GB300 access and a cap‑table lottery ticket, it becomes even harder for everyone else to attract senior talent. This concentrates innovation — and risk — into a handful of labs whose decisions will strongly shape how AI is developed and commercialised.
The bigger picture
TML’s trajectory fits neatly into a broader industry pattern: frontier labs are increasingly defined by three pillars — a strategic cloud alliance, privileged access to Nvidia hardware, and a cluster of star researchers who have cycled through Big Tech. OpenAI’s long‑term partnership with Microsoft, Anthropic’s deals with hyperscalers, and similar arrangements have already shown the blueprint. TML is essentially speed‑running that playbook.
Historically, generational shifts in computing have often coincided with talent migrations: think of the ex‑Google wave that helped shape early deep learning startups, or the PayPal alumni who seeded a generation of Web 2.0 companies. What's different now is the scale of capital and compute involved. A lab valued at $12 billion, with a single product on the market, would have been unthinkable a decade ago. The market is pricing not current output but an option on future AI dominance.
Meta’s position in this landscape is paradoxical. On the one hand, it remains one of the most important contributors to open‑source AI tooling (PyTorch, various vision and multimodal models). On the other, its strategy of open‑weight models and heavy internal research investment has not yet translated into a consumer AI product that defines the category the way ChatGPT did. That makes it harder to offer researchers the same narrative of being at the obvious centre of the AI universe.
TML’s rise suggests that the window is still open for new labs to join the top tier — provided they can stitch together compute, capital, and credibility fast enough. It also underlines how fragile moats based on talent really are: Meta spent more than a decade cultivating some of these researchers; TML hired them in a matter of months.
The European angle
For European readers, the Meta–TML talent skirmish is a reminder of an uncomfortable reality: the gravitational centre of frontier AI remains firmly in the US, increasingly clustered around a few labs wired into American cloud infrastructure and venture capital.
EU policymakers talk a lot about "technological sovereignty," but sovereignty without access to GB300‑class compute and top‑tier research leads quickly to irrelevance at the frontier. The EU AI Act, combined with GDPR and the Digital Services Act, sets an ambitious regulatory framework. Yet regulation without comparable investment in compute and talent simply pushes the most advanced experimentation elsewhere.
There is a small silver lining. As labs like TML scale, they inevitably start hiring remotely and opening satellite offices. That creates opportunities for European researchers and engineers to work on frontier systems without emigrating. For hubs like Berlin, Paris, Zurich, or Barcelona, already rich in machine learning talent, the arrival of new US‑backed labs could accelerate local ecosystems, much as the research offices of Amazon, Google, and Meta did over the past decade.
Still, there is a strategic risk: if Europe becomes mainly a labour market and a regulatory space, but not a locus for foundational models, its ability to shape the direction of the technology will be limited. The Meta–TML story shows that even US giants struggle to retain key people when a new lab offers better upside. European champions with shallower pockets will find it even harder unless they can differentiate on mission, governance, or public‑interest orientation.
Looking ahead
The obvious question is whether TML can convert its impressive roster and compute access into durable advantage. Building a frontier lab is easy to romanticise and brutally hard to execute. With roughly 140 people, TML is now at the size where coordination, not just pure research, becomes the main challenge. The company will have to define a product strategy that goes beyond "we have great people and lots of GPUs."
Expect three things over the next 12–24 months.
First, more consolidation and more bidding wars. Meta will keep poaching from TML and others; TML will continue to target Meta, OpenAI, Anthropic, and major autonomous driving and cloud teams. Salaries will remain high, but equity and research freedom will be the real differentiators.
Second, sharper scrutiny from regulators and policymakers. As a handful of labs concentrate compute, talent, and models, questions about systemic risk, safety practices, and competition will intensify. The EU AI Act includes provisions aimed at "systemic" general‑purpose models; labs like TML are precisely the kind of actor Brussels will watch, even if they are US‑based.
Third, more explicit alignment between cloud providers and specific labs. Google’s deal with TML isn’t just about selling compute; it’s about ensuring that when the next breakthrough model drops, it runs on Google’s stack, not a rival’s. That could lead to a world where choosing a lab is implicitly choosing a cloud and vice versa, further entrenching the hyperscalers.
The open question is whether any lab — including TML — can break out with a novel model or product category that justifies these valuations before investor patience runs thin.
The bottom line
Thinking Machines Lab’s raid on Meta’s AI bench, combined with its GB300‑powered Google Cloud deal, shows that the era of Big Tech monopolising frontier AI is over, but the replacement isn’t a decentralised utopia — it’s a small cartel of labs with privileged access to talent and compute. That’s exciting for innovation and worrying for everyone else. The real test will be whether new entrants like TML can turn this privileged position into broadly useful products, or whether we’re simply reshuffling power among a few well‑funded players. As users and voters, how comfortable are we with that trade‑off?