Sam Altman Predicts Superintelligence by 2028. Here's What a Working Developer Should Actually Do With That.
Quick summary
OpenAI's CEO said that by the end of 2028, more of the world's intellectual capacity could reside inside data centres than outside them. Whether or not he's right, the claim should change how you think about your career right now.
At the AI Impact Summit 2026, Sam Altman said something that went beyond the standard AI optimism loop.
"On our current trajectory, we believe we may be only a couple of years away from early versions of true superintelligence. If we are right, by the end of 2028, more of the world's intellectual capacity could reside inside data centres than outside of them."
He defined superintelligence as an AI that can perform better than any CEO or do better research than the best scientists. And he added: "We could be wrong, but it bears serious consideration."
This is not the first time Altman has made aggressive timeline predictions, but it is his most specific public claim yet about both the timing and the definition of superintelligence. Paired with his call for a global AI governance body modelled on the IAEA, it signals that OpenAI is operating as if the timeline is real.
Two years is inside the window where career decisions made today are directly affected by what AI can do at that point.
What would actually need to be true for this claim to be correct
Before deciding how to respond to a prediction, it is worth asking what has to be true for it to be right.
Altman's 2028 claim requires several things to happen simultaneously. AI systems would need to demonstrate sustained superhuman performance not just on narrow benchmarks but on open-ended research tasks — generating novel scientific hypotheses, running experimental cycles, integrating results across domains. They would need to do this reliably enough that organisations actually deploy them in place of senior researchers and executives, not just as assistants.
Current state in early 2026: AI systems are performing well on specific hard tasks — solving competition mathematics, writing production code, passing professional licensing exams. GPT-5.3-Codex scores 56.4% on SWE-Bench Pro, meaning it can resolve more than half of real-world software engineering issues. That is impressive. But resolving a defined software issue and doing open-ended research that discovers something genuinely new are very different problems.
The 2028 timeline is aggressive by the standards of most AI researchers outside OpenAI. Yann LeCun, Meta's chief AI scientist, publicly disagreed with the prediction. Most academic researchers would put superintelligence further out. The Metaculus community forecast, an aggregate of many independent predictions, puts transformative AI later than 2028 in most scenarios.
Altman himself added the uncertainty caveat. He said they could be wrong. The honest reading is that OpenAI believes its internal scaling trajectories point toward this being possible, not that it is certain.
What the claim means for developers if it is approximately right
Assume, for the purposes of a career decision, that Altman is within 18 months of correct in either direction. That puts transformative AI capability arriving sometime between 2027 and 2030. What changes for a working software developer in that scenario?
The nature of software development work continues shifting from writing code to specifying and reviewing what AI-generated code should do. This has already started. The 2028 scenario accelerates it significantly. By 2028, in the Altman scenario, AI systems are handling not just coding tasks but systems design, architecture decisions, and debugging complex distributed system failures. Senior engineers are increasingly working as reviewers and integrators of AI-produced work rather than primary authors.
This changes what is valuable to know. The skills that remain high-value in this scenario are the ones that require understanding what correct looks like, not the ones that require producing outputs quickly. Domain knowledge, systems thinking, debugging intuition, and the ability to evaluate AI outputs against real-world requirements are all harder to replace than raw code generation.
The specific technologies that matter most in this scenario are the ones closest to the frontier: large language model APIs, AI agents, vector databases, evaluation frameworks, and model fine-tuning. Developers who can build systems that use AI effectively will have a long window of high value even as AI itself gets more capable.
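Of the technologies listed above, evaluation frameworks are the least familiar to most working developers, so here is a minimal sketch of the core idea: run model outputs through deterministic checks and report a pass rate. All names here (`EvalCase`, `run_evals`, `fake_model`) are illustrative, not from any specific framework, and the stand-in model exists only so the sketch runs without an API key.

```python
# Minimal sketch of an evaluation harness: score AI-generated
# answers against deterministic checks. Illustrative names only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # returns True if the output passes

def run_evals(generate: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Run each case through the model and return the pass rate."""
    passed = sum(1 for c in cases if c.check(generate(c.prompt)))
    return passed / len(cases)

# Stand-in "model" so the sketch is self-contained and runnable.
def fake_model(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "unsure"

cases = [
    EvalCase("What is 2 + 2?", lambda out: out.strip() == "4"),
    EvalCase("Capital of France?", lambda out: "paris" in out.lower()),
]
print(run_evals(fake_model, cases))  # 0.5 with the stand-in model
```

In practice `generate` would wrap a real LLM API call, and the checks would encode the domain judgment described above: knowing what correct looks like is exactly what writing good eval cases requires.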
What the claim means if it is significantly wrong
If Altman is wrong — if transformative AI arrives in 2032 or 2035 rather than 2028 — the practical implications for career decisions made today are not that different.
The direction of change is not in question. AI is getting better at software development tasks. The speed of that improvement has been consistent enough over the past three years that betting against continued progress would require a specific argument about why progress will slow, not just hope that it does.
The difference between 2028 and 2032 is primarily how much urgency you apply to adapting now versus over the next few years. In both scenarios, the developers who build deep familiarity with AI tooling and AI-augmented workflows before they become standard are better positioned than those who wait.
Three specific things worth doing in the next twelve months
The first is direct, sustained use of AI coding tools on real projects — not toy experiments, but the actual codebase you work on day to day. The gap between using AI for isolated coding tasks and using it effectively throughout an entire project is larger than most developers expect before they try it. You learn things about where AI fails, how to prompt it effectively, and how to review its output quickly that you cannot learn from reading about it.
The second is deliberate investment in skills that AI is structurally bad at replacing: system design judgment, production operations knowledge, and domain expertise in specific industries. These are the skills that let you evaluate whether an AI's output is correct in a context where correct requires knowing something about the real world.
The third is paying attention to what is happening at the frontier. Not in an anxious way, but in the way you would track any major technology shift in your field. Reading the technical papers, following what GPT-5.3-Codex and Claude Code are actually doing, and forming your own view of the trajectory based on evidence rather than either hype or denial.
Why Altman's framing is actually useful
The most common responses to AI predictions are dismissal and panic. Dismissal says the predictions are always wrong and nothing fundamentally changes. Panic says everything is about to end and there is nothing useful to do. Neither response is actionable.
Altman's framing — specific timeline, acknowledged uncertainty, call for institutional response — is more useful than either. It treats the possibility seriously enough to warrant preparation without claiming certainty about an inherently uncertain trajectory.
The career version of that framing: treat a 2028 or 2030 superintelligence as a realistic possibility worth preparing for, not a certainty, and not something to dismiss. Make the decisions that make sense if it happens, decisions that also remain sensible if it takes longer than expected.
That means learning AI tools deeply. It means building judgment and domain expertise. It means staying close to the frontier. None of these are bad bets regardless of exactly when the trajectory Altman describes arrives.
He may be wrong about 2028. He is almost certainly right about the direction.
Written by
Abhishek Gautam
Full Stack Developer & Software Engineer based in Delhi, India. Building web applications and SaaS products with React, Next.js, Node.js, and TypeScript. 8+ projects deployed across 7+ countries.