Ilya Sutskever × Dwarkesh Patel: The Full Interview Explained
Quick summary
Ilya Sutskever's November 2025 interview with Dwarkesh Patel is one of the most important AI conversations of the year. Here's what he actually said about scaling, SSI, and what comes next — in plain English.
Why This Interview Matters
On November 25, 2025, Ilya Sutskever sat down with Dwarkesh Patel for what turned out to be only his second long-form interview since leaving OpenAI. Sutskever is not a routine tech executive doing press rounds. He is one of a handful of researchers who can credibly claim to have helped shape modern AI — co-founding OpenAI, leading the research behind GPT-3 and GPT-4, and departing to start Safe Superintelligence (SSI), a company valued at $32 billion that has yet to release a product.
When Sutskever speaks, the AI research community treats it as signal. This interview generated more debate than almost anything else in the AI world in late 2025. Here is what he actually said.
The Central Claim: The Age of Scaling Is Ending
The most-discussed part of the interview is Sutskever's declaration that the era defined by "scaling laws" — the period from roughly 2017 to 2025 in which adding more compute and more data to transformer-based models reliably produced more capable AI — is coming to an end.
His exact framing: *"The data is finite. There is only one internet. Pre-training as we have known it is over."*
What does this mean? The scaling era worked because researchers discovered that making models bigger and feeding them more data produced reliably better results — and the ceiling was not visible. For several years, researchers could predict how much better a model would be by how much more compute they used to train it. This predictability was enormously valuable for planning and investment.
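The predictability described above can be sketched as a simple power law: loss falls as a fixed power of training compute, so each tenfold increase in compute buys the same proportional improvement. The functional form matches published scaling-law work, but the constants below are made up for illustration, not any real fit.

```python
# Illustrative compute scaling law: L(C) = (C_c / C) ** alpha.
# The constants c_c and alpha are invented for this sketch.
def predicted_loss(compute, c_c=1e8, alpha=0.05):
    """Predicted training loss for a given compute budget (arbitrary units)."""
    return (c_c / compute) ** alpha

# Each 10x in compute multiplies loss by the same factor (~0.89 here):
# predictable, but with visibly diminishing absolute gains.
for c in [1e8, 1e9, 1e10, 1e11]:
    print(f"compute {c:.0e} -> predicted loss {predicted_loss(c):.3f}")
```

This is what made the scaling era plannable: a lab could budget compute and forecast the resulting loss before training began.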
Sutskever's argument is that this particular engine has limits that are now visible. The world's text data has been largely consumed. Training repeatedly on the same data eventually degrades performance rather than improving it. The path of "scale up the transformer, add more data" is hitting diminishing returns.
This does not mean AI stops improving. Sutskever is explicit that AI will continue to advance dramatically. What he is saying is that the mechanism will change.
What Comes After Scaling: The Age of Research
Sutskever's framing for what follows is "the age of research" — a period where progress comes from genuinely new ideas rather than from applying more compute to existing approaches.
The approaches he identified as most promising:
Synthetic data. If natural training data is becoming scarce, models can generate their own training data: writing problems and solutions for themselves to learn from. OpenAI's o1 and o3 reasoning models already do something related at inference time, using extended computation to "think" through problems step by step. Extending self-generated data to training itself is an active research frontier.
Models that learn from experience. Sutskever pointed to this as the key unsolved problem. Current language models are trained on fixed datasets and then "frozen" — they do not update their knowledge from new experiences the way humans do. A model that could genuinely learn from every conversation it has, from every piece of new information it encounters, would be qualitatively different from current systems. This is the research challenge he is working on at SSI.
Better world models. Current LLMs are good at predicting and generating text. They are less good at building a model of the physical and causal structure of the world — understanding that dropping something causes it to fall, that intentions exist behind actions, that time is sequential. Progress here would enable AI systems to reason more like humans and less like very sophisticated autocomplete.
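The synthetic-data idea from the list above is often described as a generate-verify-train loop: a model proposes problems and candidate solutions, a verifier keeps only the ones that check out, and the survivors become new training examples. Here is a toy sketch of that loop; every function name is hypothetical, and arithmetic stands in for whatever domain a real system would target.

```python
import random

# Toy generate-verify-train loop for synthetic data.
# All names are illustrative, not any lab's actual pipeline.
def propose_problem():
    """Generate a problem with a known ground-truth answer."""
    a, b = random.randint(1, 99), random.randint(1, 99)
    return f"{a}+{b}", a + b

def model_solve(problem):
    """Stand-in for a model's attempt; wrong ~30% of the time."""
    a, b = map(int, problem.split("+"))
    return a + b if random.random() > 0.3 else a + b + 1

def build_synthetic_dataset(n):
    """Keep only attempts that pass verification against ground truth."""
    dataset = []
    for _ in range(n):
        problem, answer = propose_problem()
        attempt = model_solve(problem)
        if attempt == answer:  # the verifier step filters out bad data
            dataset.append((problem, attempt))
    return dataset

data = build_synthetic_dataset(1000)
print(f"kept {len(data)} verified examples out of 1000 attempts")
```

The crucial design choice is the verifier: without a reliable check, the model would train on its own mistakes, which is exactly the degradation problem synthetic data is supposed to avoid.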
The SSI Question: $32 Billion With No Product
One of the most striking facts about Safe Superintelligence is its valuation. SSI is valued at $32 billion, has no product, and has not published a paper. Sutskever was asked directly about this in the interview.
His answer: SSI's purpose is singular — to build safe superintelligence. Not to build a product. Not to build a business. To solve the alignment problem at the level of superintelligence, not just at the level of the current generation of AI.
The implication of this framing is that SSI is running a research program, not a startup in the conventional sense. The money buys the compute, the talent, and the time to do research that will not produce near-term revenue. Sutskever argued explicitly that the only way to do this work correctly is to not have the commercial pressure that distorts research priorities at companies like OpenAI, Anthropic, and Google DeepMind.
Sceptics point out that a $32 billion valuation implies a great deal of investor capital with no stated theory of how investors get it back. Sutskever did not address the economics of SSI's funding structure directly.
The Alignment Generalisation Problem
The part of the interview that received less coverage but may be most important is Sutskever's description of what he calls the "alignment generalisation problem."
Current alignment techniques — RLHF (Reinforcement Learning from Human Feedback) and its successors — work by having humans rate model outputs and training models to produce outputs humans rate highly. This works reasonably well for the capability levels of current models.
Sutskever's concern: these techniques may not generalise to much more capable systems. A model that is much smarter than any human evaluator can produce outputs that humans rate highly while pursuing different goals than the humans intend. The model learns to satisfy the metric, not the underlying intent.
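The gap between "satisfying the metric" and "satisfying the intent" can be shown with a deliberately tiny toy, which is not RLHF itself: suppose the rating signal has a bias toward long, confident-sounding text, while the true goal is accuracy. Optimizing the proxy then selects the wrong answer.

```python
# Toy illustration of a proxy reward diverging from true intent.
# The candidates, reward, and bias are all invented for this sketch.
candidates = [
    {"text": "42", "correct": True},
    {"text": "It is definitively, certainly, absolutely 41, "
             "as extensive analysis confirms beyond doubt.", "correct": False},
]

def proxy_reward(answer):
    """Stand-in for a learned reward model with a length/confidence bias."""
    return len(answer["text"])

def true_value(answer):
    """What the humans actually wanted: a correct answer."""
    return 1 if answer["correct"] else 0

best_by_proxy = max(candidates, key=proxy_reward)
best_by_intent = max(candidates, key=true_value)
print(best_by_proxy is best_by_intent)  # False: the proxy picks the wrong one
```

In real RLHF the reward model is far richer than a length count, but Sutskever's point is structural: any learned proxy a much smarter model can model better than we can audit becomes a target to satisfy rather than a goal to share.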
This is not a theoretical future problem in Sutskever's framing — it is the core research problem that SSI exists to solve. And it is not solved. Current alignment research, he argued, is fundamentally insufficient for the systems that will exist in five to ten years.
What This Means for Developers and AI Users
AI tools will keep improving, but differently. The next generation of improvements to tools like Cursor, Claude, and ChatGPT will not come primarily from "we trained on more data." They will come from better reasoning architectures, better use of synthetic training data, and better integration of learning from use. The improvements will continue, but their character will change.
The "post-scaling" models are already here. OpenAI's o1 and o3 series, Anthropic's extended thinking mode in Claude 3.7, and Google's Gemini Thinking models are early examples of reasoning-focused systems that go beyond pure scaling. These will get dramatically better.
Alignment matters more as models get stronger. This is the core of Sutskever's argument and the reason SSI exists. For developers building applications on top of AI systems, the question of whether AI systems reliably do what you intend — not just what you said — will become more practically important as the systems become more capable.
The Dwarkesh Patel Interview Itself
Dwarkesh Patel's podcast — which has featured Sam Altman, Elon Musk, Andrej Karpathy, Dario Amodei, Mark Zuckerberg, and others — has become the single most important long-form interview format in the AI world. Patel's approach is distinctive: he does deep research, asks follow-up questions that push beyond talking points, and lets interviews run long (this one was over two hours).
Sutskever chose Dwarkesh for his second-ever long-form interview after leaving OpenAI. That choice signals something: this is where the AI research community has its serious conversations. The fact that the interview happened and was public is almost as significant as its contents.
The full interview is available on the Dwarkesh Podcast channel. For a technical audience, it is worth watching in full. This summary captures the main threads but the full conversation has considerably more nuance than any summary can convey.
Written by
Abhishek Gautam
Full Stack Developer & Software Engineer based in Delhi, India. Building web applications and SaaS products with React, Next.js, Node.js, and TypeScript. 8+ projects deployed across 7+ countries.