Yann LeCun Is Done Being Meta’s Rebel. Now He Wants to Rethink Intelligence Itself

January 7, 2026
5 min read
Yann LeCun speaking on stage about artificial intelligence

Yann LeCun meets me in a near-empty Parisian dining room, wedged between two plastic Christmas trees, talking about “total world assistance.” He quickly corrects himself from “world domination” because, as he notes, that “sounds scary with AI.”

That tension—between ambition and anxiety—has defined LeCun’s past decade at Meta. Now the 65-year-old Turing Award winner is walking away from Big Tech’s front lines to build his own lab, betting that today’s large language models are a dead end for superintelligence.

From Meta’s godfather of AI to independent “neolab” founder

Born in 1960 and raised in the suburbs of Paris, LeCun is one of the so‑called godfathers of modern AI, alongside Geoffrey Hinton and Yoshua Bengio. The trio’s work on deep learning earned them the 2018 Turing Award.

At Meta (then Facebook), LeCun built the company’s AI research unit from scratch. Mark Zuckerberg personally recruited him in 2013 over a dinner in California—“chicken with some pretty good white wine,” LeCun recalls.

He only signed on under three conditions:

  1. He could keep his professorship at NYU.
  2. He wouldn’t have to move to California.
  3. The lab’s research would remain public.

Zuckerberg agreed. Facebook Artificial Intelligence Research (FAIR) was born, with what LeCun describes as a “tabula rasa with a carte blanche.” Money, he says, “was clearly not going to be a problem.”

A decade later, that relationship has unraveled. After more than 10 years as Meta’s chief AI scientist, LeCun is leaving to help build Advanced Machine Intelligence Labs, a new startup headquartered in France and led by Alex LeBrun, co‑founder and CEO of health‑care AI startup Nabla. LeCun will serve as executive chair, not CEO.

“I’m a scientist, a visionary,” he says. “I can inspire people to work on interesting things. I’m pretty good at guessing what type of technology will work or not. But I can’t be a CEO. I’m both too disorganized for this, and also too old!”

The Financial Times first reported his departure from Meta, triggering a wave of fundraising meetings and even a WhatsApp message from French President Emmanuel Macron, who was “pleased” that the new “worldwide” company would stay tightly linked to France.

The long view: From Bell Labs to deep learning’s breakout

LeCun has been obsessed with intelligence since he saw 2001: A Space Odyssey as a kid. His father, an aeronautical engineer and tinkerer, encouraged him to build things—model airplanes, instruments like the recorder and even the crumhorn, a “weird Renaissance instrument” he played in a dance band.

A teacher once told him he was too weak at math to study it at university, so he went into engineering instead. That detour eventually led him into one of computing’s coldest backwaters at the time: neural networks.

In the 1980s, AI based on neural networks was widely considered a dead field. LeCun went looking for “soulmates” and found Geoffrey Hinton, then at Carnegie Mellon and later at the University of Toronto, where LeCun joined him as a postdoc. Together with Yoshua Bengio, they laid the foundations of the deep learning revolution.

At AT&T Bell Labs in New Jersey in the late 1980s and 1990s, LeCun built convolutional neural networks—architectures that became the backbone of modern image recognition. His system for recognizing handwritten digits was deployed at banks to read checks.
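For readers who want a sense of what that kind of network looks like, here is a minimal sketch of a LeNet-style convolutional classifier in PyTorch. The layer sizes are illustrative rather than LeCun’s original LeNet-5 configuration, but the recipe is the same one his digit reader relied on: stacked convolutions and pooling feeding a small classifier.

```python
# A minimal, illustrative LeNet-style digit classifier (not the original LeNet-5).
import torch
import torch.nn as nn

digit_classifier = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5, padding=2),  # learn local strokes on a 28x28 grayscale digit
    nn.ReLU(),
    nn.MaxPool2d(2),                            # shrink the feature map, keeping strong responses
    nn.Conv2d(6, 16, kernel_size=5),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120),
    nn.ReLU(),
    nn.Linear(120, 10),                         # one score per digit, 0 through 9
)

scores = digit_classifier(torch.randn(1, 1, 28, 28))  # a stand-in for a scanned digit
print(scores.shape)  # torch.Size([1, 10])
```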

Bell Labs was almost comically flush. LeCun recalls his boss Larry Jackel telling him on arrival: “At Bell Labs? You don’t get famous by saving money.”

Later, as corporate restructuring gutted the lab, he moved back into academia at NYU, then into Facebook, just as deep learning was proving itself on image recognition benchmarks around 2013.

When ChatGPT blew up the roadmap

By early 2022—before ChatGPT—every major lab had some flavor of large language model. Most treated them as research toys.

Then OpenAI quietly wrapped its model in a chatbot and let anyone try it. The result: a global stampede toward generative AI.

Inside Meta, ChatGPT triggered a frantic reorg. Zuckerberg put “all their chips” on LLMs, according to LeCun, refocusing the company on Llama, its large language model family, and spinning up a generative AI unit charged with getting models into products as fast as possible.

LeCun pushed hard for openness. Llama 2, released in 2023 with open weights, was, in his words, a “watershed” moment that “changed the entire industry.” For a while, Meta was cast as the good guy of AI—open models versus the closed, centralized approach of OpenAI and Google.

But the sprint had side effects. As Meta doubled down on shipping productized LLMs, LeCun’s group, focused on more speculative architectures, struggled to get traction.

“We had a lot of new ideas and really cool stuff that they should implement,” he says. “But they were just going for things that were essentially safe and proved. When you do this, you fall behind.”

The fallout was brutal. Later Llama versions disappointed. Llama 4, released in April 2025, was widely seen as a flop; Meta was accused of gaming benchmarks. LeCun now concedes that the “results were fudged a little bit,” with different models quietly swapped in for different tests.

“Mark was really upset and basically lost confidence in everyone who was involved in this,” LeCun says. The generative AI organization was sidelined. “A lot of people have left, a lot of people who haven’t yet left will leave.”

Enter Alexandr Wang—and the breaking point

Meta’s next move was to buy speed and talent. In June 2025, the company invested $15 billion in Scale AI, the data‑labeling startup, and brought in its 28‑year‑old co‑founder and CEO Alexandr Wang to run a new frontier‑model effort called TBD Lab. Reports surfaced of Meta dangling $100 million sign‑on bonuses to poach elite researchers.

LeCun describes Wang as “young” and “inexperienced” on research culture, even as he acknowledges that Wang “learns fast” and “knows what he doesn’t know.”

“There’s no experience with research or how you practice research, how you do it. Or what would be attractive or repulsive to a researcher,” LeCun says.

Wang also became LeCun’s manager. LeCun insists that hierarchy wasn’t the issue—he has long worked with engineers half his age. The real tension was over direction.

“The crowd who were hired for the company’s new superintelligence push are completely LLM‑pilled,” he says.

That clashes with LeCun’s core belief that LLMs alone cannot get us to superhuman intelligence.

“I’m sure there’s a lot of people at Meta, including perhaps Alex, who would like me to not tell the world that LLMs basically are a dead end when it comes to superintelligence,” he says. “But I’m not gonna change my mind because some dude thinks I’m wrong. I’m not wrong. My integrity as a scientist cannot allow me to do this.”

Staying, he says, became “politically difficult.” At the same time, industrial partners outside Big Tech—from jet engines to heavy industry—were getting interested in his more radical ideas. Investors were ready to bankroll a spin‑out.

Why LeCun thinks LLMs hit a wall

LeCun doesn’t hate large language models. He thinks they’re useful. But he believes they’re fundamentally limited—and that Silicon Valley’s obsession with scaling them is misplaced.

The core problem: language is too narrow a window on reality.

To reach human‑level intelligence, he argues, machines must internalize how the physical world works, not just predict the next word in a sentence scraped from the internet.

His alternative is an architecture he calls V‑JEPA (Video Joint Embedding Predictive Architecture), a “world model” that learns from video and spatial data instead of text alone. It sits at the heart of what he labels Advanced Machine Intelligence (AMI); a rough code sketch of the idea follows the list below.

World models aim to:

  • Build rich internal simulations of the physical world
  • Plan and reason over those simulations
  • Maintain persistent memory
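The joint-embedding idea behind JEPA-style world models can be caricatured in a few lines of code. The sketch below is an illustration under broad assumptions, not Meta’s V‑JEPA implementation: it encodes observations into a latent state, rolls that state forward given an action, and trains on the prediction error in latent space rather than in pixels.

```python
# An illustrative sketch of a JEPA-style world model, not Meta's V-JEPA code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyWorldModel(nn.Module):
    def __init__(self, obs_dim=512, action_dim=8, latent_dim=64):
        super().__init__()
        # Encoder: compress a raw observation (e.g. features of a video frame)
        # into a compact latent state of the world.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )
        # Predictor: the "simulation" step, rolling the latent state forward
        # given an action the agent takes.
        self.predictor = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )

    def loss(self, obs_t, action_t, obs_next):
        z_t = self.encoder(obs_t)        # latent state now
        z_next = self.encoder(obs_next)  # latent state one step later
        z_pred = self.predictor(torch.cat([z_t, action_t], dim=-1))
        # The error is measured in latent space, not pixel space: the model only
        # has to predict the abstract state of the world, not every detail of the
        # next frame. (Real JEPA systems add tricks, such as a slowly updated
        # target encoder, to keep this objective from collapsing; a plain
        # stop-gradient stands in for that here.)
        return F.mse_loss(z_pred, z_next.detach())

# Toy usage with random tensors standing in for frame features and actions.
model = TinyWorldModel()
obs_t, obs_next = torch.randn(32, 512), torch.randn(32, 512)
actions = torch.randn(32, 8)
model.loss(obs_t, actions, obs_next).backward()
```

Planning, in this picture, means searching over sequences of actions whose predicted latent outcomes look good; persistent memory means keeping some of those latent states around.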

His latest designs try to bake in something like “emotion” as a shortcut for past experience.

“If I pinch you, you’re going to feel pain,” he explains. “But then your mental model of me is going to be affected by the fact that I just pinched you. And the next time I approach my arm to yours, you’re going to recoil. That’s your prediction, and the emotion it evokes is fear or avoidance of pain.”

These emotional tags guide the system’s predictions, much like human feelings compress complex history into quick actionable signals.

LeCun thinks we’ll see “baby” versions of such systems in about 12 months, with larger‑scale versions a few years after that. It’s not superintelligence, but he sees it as a path toward it. “Maybe there is an obstacle we’re not seeing yet, but at least there is hope.”

The rise of the AI “neolab”

LeCun’s next act fits a pattern. He calls his new venture a “neolab”—a startup that does fundamental research first, products later.

He points to ventures like Thinking Machines, led by former OpenAI CTO Mira Murati—“I hope the investors know what they do”—and Safe Superintelligence, co‑founded by OpenAI’s ex‑chief scientist Ilya Sutskever—“There I know the investors have no idea what they do”—as examples of how frontier AI is drifting away from Big Tech’s standard product cycles.

In his own case, applications could land far away from social media: in jet engines, robotics, heavy industry, and any domain where understanding physical dynamics beats regurgitating text.

LeCun won’t run the company day to day. As executive chair, he wants the same freedom to chase research that he had at Meta, minus the politics of a public company trying to monetize every breakthrough.

"We suffer from stupidity"

Over lunch at Pavyllon—eggs, tuna tartare, foie gras, Comté soufflé, cod with herbed breadcrumbs, bricelets for dessert—LeCun oscillates easily between war stories from Bell Labs, anecdotes about New Jersey suburbia, and sharp jabs at the current LLM gold rush.

One thing he doesn’t waffle on is his legacy.

He wants to “increase the amount of intelligence in the world.”

“Intelligence is really the thing that we should have more of,” he says. With more intelligence, he argues, comes “less human suffering, more rational decisions, and more understanding of the world and the universe.”

Then he adds, almost as an aside: “We suffer from stupidity.”

Whether his world‑model bet pays off is still an open question. But after more than a decade as Meta’s in‑house dissident on LLMs, LeCun now has what he’s always wanted: a fresh lab, a big canvas, and investors willing to fund a different path to machine intelligence.
