Yann LeCun Raised $1.03 Billion to Prove the Entire LLM Industry Is Wrong

Abhishek Gautam · 8 min read

Quick summary

Ex-Meta AI chief Yann LeCun's startup AMI Labs raised $1.03 billion in the largest-ever seed round by a European startup. He is betting that large language models are a dead end and that world models via JEPA architecture will win instead.

Yann LeCun shared the 2018 Turing Award, spent seven years as Meta's Chief AI Scientist, and used much of that time publicly telling anyone who would listen that large language models are fundamentally the wrong approach to building intelligent machines.

On March 10, 2026, his new company AMI Labs — Advanced Machine Intelligence Labs — announced it had raised $1.03 billion in seed funding at a $3.5 billion pre-money valuation. It is believed to be the largest seed round ever raised by a European startup.

LeCun is now putting over a billion dollars behind the thesis that the entire LLM industry has got it wrong.

What Is AMI Labs Building

AMI Labs is building what LeCun calls world models: AI systems that learn by predicting the next state of the physical environment, not the next word in a text sequence.

The distinction sounds academic. It is not.

A large language model trained on text learns statistical associations between tokens. It can answer questions, write code, summarise documents, and pass bar exams. It cannot reliably reason about physics, plan sequences of actions, or understand cause and effect in the way a two-year-old child can. Ask GPT-5 to plan a route through a room with furniture in it and it will hallucinate. Ask a toddler and they will walk around the chair.

LeCun has framed this gap in terms of Moravec's paradox — Hans Moravec's observation that tasks that are hard for humans (calculus, chess) are easy for machines, while tasks that are trivially easy for humans (walking, grasping, recognising a face) remain genuinely hard for machines. LLMs excel at the former and are largely useless at the latter.

JEPA: The Architecture Behind World Models

AMI's technical approach is built around JEPA — Joint Embedding Predictive Architecture — a framework LeCun developed while at Meta.

The key difference between JEPA and generative AI (like GPT or Sora) is what the model predicts. A generative model tries to predict every detail of the next output: every pixel, every word. That is computationally expensive, and it forces the model to spend capacity on details that are inherently unpredictable noise.

JEPA instead predicts the future in an abstract representation space. Rather than asking "what will the next pixel be?", it asks "what will the abstract state of the scene be next?" The model learns to ignore irrelevant detail and focus on what actually changes causally.

This is closer to how human perception works. When you watch a ball roll across a floor, you do not mentally reconstruct every photon reflecting off its surface. You understand that the ball is moving, that it will decelerate due to friction, and that it will stop when it hits the wall. You have a world model.
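The contrast can be made concrete with a toy sketch. This is not AMI's code; all names and the "world" here are hypothetical, constructed only to show why predicting in a representation space sidesteps noise that a generative model would have to guess.

```python
import random

def next_observation(pos, vel, friction=0.9):
    """World step: a rolling ball decelerates; each frame also carries
    fresh sensor noise that contains no causal information."""
    new_vel = vel * friction
    new_pos = pos + new_vel
    noise = [random.gauss(0, 1) for _ in range(8)]  # unpredictable detail
    return new_pos, new_vel, [new_pos] + noise

def encode(observation):
    """Encoder: keep only the causally relevant feature (position),
    discarding the noise components."""
    return observation[0]

def predict_latent(pos, vel, friction=0.9):
    """Predictor in representation space: simple dynamics, no pixels."""
    return pos + vel * friction

pos, vel = 0.0, 1.0
_, _, obs = next_observation(pos, vel)

# A generative model would have to predict all nine observation values,
# eight of which are pure noise. The latent predictor matches the encoded
# next state exactly, because it only models what changes causally.
assert abs(predict_latent(pos, vel) - encode(obs)) < 1e-9
```

The point of the sketch: the noise components of `obs` are impossible to predict, so any model scored on reconstructing them wastes capacity, while the abstract state is fully predictable.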

The Investors

The $1.03 billion round was led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions — Jeff Bezos's personal investment vehicle. Strategic investors include Nvidia, Toyota, and Samsung. Individual investors include Tim Berners-Lee, Jim Breyer, Mark Cuban, and former Google CEO Eric Schmidt.

Nvidia's participation is notable. Jensen Huang has simultaneously bet heavily on LLM inference (Blackwell GPUs, NVLink, NIM microservices) and on physical AI and robotics. An investment in AMI Labs is consistent with that second bet — world models are exactly what advanced robotics requires.

LeCun's Criticism of LLMs

LeCun has been consistent and public in his scepticism of the LLM scaling approach for years. His core claims:

LLMs predict text. They do not understand the world. Scaling them — adding more parameters, more compute, more data — produces better text prediction but does not close the gap to genuine reasoning and planning.

LLMs hallucinate because they have no ground truth model of reality. They cannot verify their outputs against a world model because they have no world model. Every response is a plausible continuation of training data, not a reasoned inference about what is true.

LLMs are fundamentally passive. They respond to prompts. A truly intelligent system needs to have goals, to plan sequences of actions to achieve those goals, and to update its plans when reality diverges from expectation. LLMs cannot do this reliably.
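That third claim describes a plan-act-replan loop. The following is a minimal sketch of the idea under toy assumptions (a one-dimensional world, a hand-written "world model"; all names hypothetical): the agent plans with its internal model, acts, and replans from the observed state when reality diverges from prediction.

```python
def world_model(state, action):
    """The agent's internal prediction of the next state."""
    return state + action

def real_world(state, action, slip=0.0):
    """The actual environment, which can diverge from the model
    (here, a one-off 'wheel slip')."""
    return state + action - slip

def plan(state, goal, horizon=5):
    """Greedy planner: choose actions the world model says reach the goal,
    capped at one unit of movement per step."""
    actions, s = [], state
    for _ in range(horizon):
        a = max(-1.0, min(1.0, goal - s))
        actions.append(a)
        s = world_model(s, a)
    return actions

state, goal, steps = 0.0, 3.0, 0
while abs(goal - state) > 1e-9 and steps < 20:
    action = plan(state, goal)[0]       # replan from the observed state
    slip = 0.5 if steps == 0 else 0.0   # reality diverges on the first step
    state = real_world(state, action, slip)
    steps += 1

# Despite the divergence, replanning from observations reaches the goal.
assert abs(state - goal) < 1e-9
```

The design point is that the loop closes over observations, not over the original plan: when the first action underdelivers, the next iteration plans from where the agent actually is, which is the behaviour LeCun argues prompt-driven LLMs lack.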

His January 2026 departure from Meta and the formation of AMI Labs can be read as the culmination of a long-running disagreement with the direction of the AI industry — a disagreement he is now funding at scale.

What AMI Is Targeting

AMI Labs is not building a ChatGPT competitor. Its target applications are industrial process control, robotics, autonomous vehicles, wearable devices, and healthcare systems — domains where the ability to model physical reality and plan actions in it is essential, and where LLMs are genuinely inadequate.

This matters for developers in a different way than the OpenAI-versus-Anthropic competition does. AMI is not competing for the next chatbot contract. It is competing for the wave of applications that will come after the current generation of LLM-based products hits the ceiling of what text prediction can do.

If LeCun is right, that ceiling is lower and closer than the LLM industry believes.

The Counter-Argument

The LLM industry's response to LeCun is not dismissive. Researchers at OpenAI, DeepMind, and Anthropic have pointed to chain-of-thought reasoning, reinforcement learning from human feedback, and tool use as evidence that LLMs can acquire reasoning capabilities that go beyond text prediction.

The honest answer is that nobody yet knows whether scaling LLMs will eventually produce world-model-equivalent reasoning, or whether a fundamentally different architecture — like JEPA — is required. LeCun is betting $1.03 billion that scaling will not be enough. The LLM industry is betting its entire valuation that it will.

This is the most important architectural bet in AI right now.

