Ilya Sutskever: The Man Who Tried to Stop OpenAI, Then Left to Build Something More Dangerous
Quick summary
Ilya Sutskever co-founded OpenAI, voted to fire Sam Altman in 2023, then quietly left to start Safe Superintelligence — an AI lab with no products, no revenue targets, and a single goal: solve safety before building anything else. Here is the full story.
The Name Behind the Technology
When ChatGPT launched in November 2022 and added 100 million users in two months, most of the credit went to Sam Altman — the CEO, the face, the person doing the interviews. The name Ilya Sutskever appeared less often. Which is strange, because without Sutskever, there probably is no ChatGPT.
Ilya Sutskever is one of the three researchers behind AlexNet, the 2012 result that ignited the modern deep learning era, and a co-author of foundational work on the sequence-learning techniques that led to today's large language models. He co-founded OpenAI. He served as its Chief Scientist for nearly a decade. He voted to fire Sam Altman. Then he left to build something he believes OpenAI stopped being willing to build: genuinely safe superintelligence.
This is his story.
The Research That Changed Everything
Sutskever was born in Russia, grew up in Israel and then Canada, and completed his PhD at the University of Toronto under Geoffrey Hinton — one of the three "godfathers of deep learning." His doctoral work on training deep and recurrent neural networks was foundational to the field.
In 2012, Alex Krizhevsky, Sutskever, and Hinton published AlexNet — a deep convolutional neural network that won the ImageNet computer vision competition with a top-5 error rate of 15.3 percent against the runner-up's 26.2 percent, a margin unheard of in the competition's history. AlexNet demonstrated definitively that deep learning could outperform decades of hand-engineered computer vision approaches. It is widely considered the paper that started the modern AI era.
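For readers who have only seen convolutional networks described in prose, here is a minimal AlexNet-style network sketched in PyTorch. It is an illustrative toy, not the original architecture: the layer count and classifier are simplified, though the final feature-map dimensions match the 2012 design.

```python
# A minimal AlexNet-style convolutional network (illustrative sketch;
# simplified from the original 2012 architecture).
import torch
import torch.nn as nn

class TinyAlexNet(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        # Stacked convolution + ReLU + pooling blocks learn visual features
        # directly from pixels, replacing hand-engineered descriptors.
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(192, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
        )
        self.classifier = nn.Linear(256 * 6 * 6, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# A 224x224 RGB image batch maps to 1000 ImageNet class scores.
logits = TinyAlexNet()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```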
After his PhD, Sutskever worked at Google Brain before joining Sam Altman, Elon Musk, and others to co-found OpenAI in 2015. He became Chief Scientist — the person responsible for the technical direction of the organisation's research.
What Sutskever Built at OpenAI
As Chief Scientist, Sutskever oversaw the research that produced:
GPT series — the language models that eventually became ChatGPT. GPT-1 through GPT-4 were developed under his scientific leadership.
InstructGPT and RLHF — Reinforcement Learning from Human Feedback, the technique that turned GPT models from raw text predictors into assistants that follow instructions and behave helpfully. It is the core technique behind ChatGPT (a toy sketch of the idea follows this list).
Codex — the model behind GitHub Copilot, which turned GPT into a coding assistant used by millions of developers.
DALL-E — the image generation model, developed under his broader research oversight.
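To make the RLHF idea concrete, here is a toy sketch of its two core objectives, assuming nothing about OpenAI's actual implementation: a reward model is trained on human preference pairs, then the language model is optimised to score well under that reward without drifting too far from its original behaviour. The function names and the beta coefficient are hypothetical.

```python
# Toy sketch of the two RLHF objectives (hypothetical names and values;
# not OpenAI's implementation).
import torch
import torch.nn.functional as F

def reward_model_loss(reward_chosen: torch.Tensor,
                      reward_rejected: torch.Tensor) -> torch.Tensor:
    # Stage 1: the reward model learns to score the response a human
    # preferred higher than the one they rejected (Bradley-Terry loss).
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

def policy_loss(reward: torch.Tensor, logprob: torch.Tensor,
                logprob_ref: torch.Tensor,
                beta: float = 0.1) -> torch.Tensor:
    # Stage 2: the language model maximises the learned reward, with a
    # KL-style penalty for drifting from the pre-RLHF reference model.
    kl_penalty = logprob - logprob_ref
    return -(reward - beta * kl_penalty).mean()

# Toy numbers standing in for real model outputs.
print(reward_model_loss(torch.tensor([1.2]), torch.tensor([0.3])))
print(policy_loss(torch.tensor([0.8]), torch.tensor([-2.0]),
                  torch.tensor([-2.3])))
```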
Sutskever was not the only researcher on these projects — OpenAI has hundreds of researchers — but as Chief Scientist he was the primary scientific authority. The technical bets OpenAI made in the 2015–2024 period were substantially shaped by him.
The Board Crisis: Voting to Fire Sam Altman
In November 2023, OpenAI's board voted to fire CEO Sam Altman. The stated reason was that Altman had been "not consistently candid" with the board. Sutskever voted with the board majority to remove him.
Within days, the decision collapsed. OpenAI employees threatened to resign en masse. Microsoft, OpenAI's largest investor, pushed for Altman's reinstatement. Altman returned as CEO within a week. The board members who voted to remove him — including Sutskever — were replaced or resigned.
Sutskever's public statement after the reversal was notable: "I deeply regret my participation in the board's actions. I never intended to harm OpenAI."
The precise reasons for the board's original decision have never been fully disclosed. What emerged from reporting at the time was a picture of philosophical disagreement: board members and some researchers believed OpenAI was moving too fast, prioritising commercial products over safety research, and that Altman's communication with the board about the pace and direction of the company was inadequate.
Whether Sutskever's concern was primarily about Altman specifically, or about the direction of OpenAI more broadly, the effect was the same: the relationship was irrevocably changed.
Leaving OpenAI
In May 2024, Sutskever officially departed OpenAI. He had been largely absent from the office since the board crisis. His departure statement said simply: "After almost a decade, I have made the decision to leave OpenAI. The company is in good hands... I have a lot of good work ahead of me and I'm excited for what comes next."
Sam Altman's response was warm and public: "Ilya, I will forever be grateful for your contributions to OpenAI and your friendship over the years."
What he was building next became clear within weeks.
Safe Superintelligence Inc.: One Goal, No Products
On June 19, 2024, Sutskever announced Safe Superintelligence Inc. (SSI) alongside co-founders Daniel Gross (a former Y Combinator partner who led AI efforts at Apple) and Daniel Levy (a former OpenAI researcher).
The company's stated mission is deliberately narrow: build safe superintelligence. Nothing else.
No chatbot. No API. No consumer product. No enterprise contracts. Just research toward a system that is both superintelligent and safe — in that specific combination, which Sutskever believes most AI development is currently neglecting.
SSI's founding statement: "We will not be distracted by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."
This was a direct implicit critique of OpenAI — a company that started with a similar safety-first mission statement and had, in Sutskever's view, allowed commercial pressures to compromise it.
The Fundraising Surprise
The conventional wisdom was that Sutskever's credibility as a researcher was enormous — but that a company with no product, no timeline, and no deliverable would struggle to raise capital.
The conventional wisdom was wrong.
In September 2024, SSI raised $1 billion from Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. In March 2025, it raised a further $2 billion, reaching a reported valuation of $32 billion. Google Cloud announced a partnership to provide TPU compute infrastructure.
$3 billion for a company with no product and no stated near-term deliverable. The investors were betting on one thing: Sutskever's track record and his belief that the current path of AI development is missing something fundamental.
The Viral Interview: The Era of Scaling Is Over
In late 2025, Sutskever gave a long interview with podcaster Dwarkesh Patel that went viral in AI research circles. The central claim: the era of scaling is over.
For years, the dominant paradigm in AI was simple: bigger models, more data, and more compute yield better performance. This "scaling hypothesis" drove hundreds of billions of dollars of investment in chips, data centres, and training runs.
Sutskever argued this paradigm has reached its limits. The easy gains from simply scaling up are exhausted. What comes next requires genuinely new ideas — new training approaches, new architectures, new ways of making models learn and generalise.
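The scaling hypothesis was eventually formalised in empirical scaling laws, which model loss as a power law in parameter count and training tokens. The sketch below uses the Chinchilla-style functional form with illustrative placeholder coefficients (not fitted values from any published paper) to show why each jump in scale buys less than the last.

```python
# Chinchilla-style scaling law: loss = E + A/N^alpha + B/D^beta, where N
# is parameter count and D is training tokens. All coefficients below are
# illustrative placeholders, not fitted values from any published paper.
def scaling_loss(n_params: float, n_tokens: float,
                 E: float = 1.7, A: float = 400.0, B: float = 400.0,
                 alpha: float = 0.34, beta: float = 0.28) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"N={n:.0e}, D={d:.0e} -> loss {scaling_loss(n, d):.3f}")
# Each 10x jump in scale lowers loss by less than the previous one: the
# curve flattens toward the irreducible term E, which is the
# diminishing-returns picture behind "the era of scaling is over".
```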
"We have returned to the era of research," he said. "The bottleneck is now ideas, not compute."
This is a provocative claim that contradicts the investment theses of major AI companies and chip manufacturers. It also explains why SSI exists: if the scaling approach cannot get to safe superintelligence, then someone needs to find the new ideas that can.
What SSI Is Actually Building
SSI is unusually secretive. Unlike most AI companies, it does not publish regular research papers, does not demo products, and does not communicate publicly about its technical approach.
What is known: the company employs a small team of elite researchers, operates offices in Palo Alto and Tel Aviv, and is working on what Sutskever describes as fundamentally new learning paradigms — approaches to AI training and architecture that go beyond the transformer-based scaling approach that has dominated the field since 2017.
In mid-2025, SSI's co-founder and CEO Daniel Gross departed for Meta. Sutskever assumed the CEO role himself — unusual for a researcher of his background, but perhaps necessary for a company where the research vision and the company direction are inseparable.
Why It Matters
Sutskever is not a pundit or a commentator. He is one of a handful of people in the world with the deepest practical understanding of how current AI systems work — having led the research that built the most successful ones.
When he says scaling is over and that genuinely new ideas are needed for safe superintelligence, it is not a hot take. It is a technical assessment from the person who probably understands the limits of current approaches better than anyone outside of a handful of labs.
Whether SSI succeeds in its mission is genuinely unknown. Building safe superintelligence is an open research problem that may take decades. But the existence of SSI — well-funded, elite team, no commercial distractions — represents a serious attempt to answer the question that Sutskever believes OpenAI stopped being willing to ask honestly.
For a breakdown of what his viral interview actually argued, read the Ilya Sutskever scaling interview explained.