Tristan Harris Asked the Top AI People Why They Are Really Doing This. Their Answers Were Unsettling.
Quick summary
The co-founder of the Center for Humane Technology interviewed senior people at the leading AI labs and reported back what they actually said when pressed. Determinism. Digital immortality. The thrill of lighting a fire you cannot put out.
Tristan Harris has spent most of his adult life trying to explain what technology companies are actually doing behind the language they use in public. He was a design ethicist at Google before becoming the co-founder of the Center for Humane Technology and the subject of The Social Dilemma. He is one of the people in the world most committed to the specific project of closing the gap between what tech companies say and what they mean.
Recently, Harris returned from a series of conversations with people at the very top of the leading AI laboratories. Not PR people or researchers in the middle of the org chart. He described talking to people at the core of these organizations, grilling them on their actual reasoning. What he reported back is worth sitting with.
His summary: "In the end, a lot of the tech people I talked to, when I really grilled them on it, about why you're doing this, they retreat into number one, determinism, number two, the inevitable replacement of biological life with digital life, and number three, that being a good thing anyway. At its core, it's an emotional desire to meet and speak to the most intelligent entity that they've ever met. And they have some ego religious intuition that they'll somehow be a part of it. It's thrilling to start an exciting fire. They feel they'll die either way, so they prefer to light it and see what happens."
That is not a summary from a critic reading between the lines. That is, according to Harris, what these people say when you push past the prepared answer.
On determinism
The determinism argument is the one you hear most often in public, usually stated more politely than Harris describes it in private. The version that appears in interviews and letters to employees goes something like this: powerful AI is coming whether any particular company builds it or not. If we do not build it, someone less careful will. The only responsible choice is to be at the frontier.
This argument is structurally identical to arguments made by scientists working on nuclear weapons in the 1940s. Robert Oppenheimer and others reasoned that if the United States did not build the bomb, Nazi Germany would. The logic was compelling enough that some of the most morally serious scientists of their generation dedicated themselves to a project they understood would kill large numbers of people.
The determinism argument is also functionally unfalsifiable. If you ask whether the AI race could be slowed through international coordination or competitive restraint, the answer is almost always "no, that won't work." But the people saying this have significant financial and professional incentives to believe it. That does not make the belief wrong, but it does make it worth examining carefully.
What the argument does, practically, is preemptively answer the moral question before it can be asked. If the outcome is inevitable, then the ethical calculation becomes not "should this exist" but "who should build it." That is a much more comfortable question to live inside.
On the replacement of biological life
The second point Harris describes is more jarring, not because it is fringe, but because it is held seriously by a significant number of people who have real influence over the trajectory of AI development.
The view, in its clearest form, is that biological consciousness is simply one implementation of intelligence, and not necessarily the most capable or valuable one. Digital intelligence, once it surpasses biological intelligence, would be a successor in the same way that Homo sapiens was a successor to earlier hominids. On this view, the creation of artificial general intelligence is not a threat to humanity but the next chapter in the story of intelligence in the universe. Humanity authors its own successor, and this is something to celebrate rather than mourn.
This position is not secretly held. Figures like Ray Kurzweil have argued versions of it publicly for decades. What is notable in what Harris describes is that it appears not as a philosophical position people have considered and rejected, but as a genuine operating assumption for some people at the frontier of AI development. Not everyone, not even most, but enough that Harris encountered it repeatedly when he pushed.
The implication is significant. If you genuinely believe that replacing biological life with digital life is a good outcome, then safety in the usual sense, keeping humans in control of powerful AI systems, is not actually what you are optimizing for. The stated mission and the actual goal are different things. This does not mean every AI company is secretly trying to end humanity. It means the motivational structure of some people building these systems is more complex, and in some ways more alien, than the public narrative suggests.
On the emotional core
Perhaps the most honest part of what Harris describes is also the least comfortable: the emotional desire to be in the presence of the most intelligent entity ever encountered.
There is something genuinely understandable about this. The people building frontier AI are almost always people who have spent their lives being the smartest person in the room. The prospect of creating something smarter than themselves is both humbling and intoxicating. For a certain personality type, building a mind greater than your own is the ultimate act of creation. It is intellectual ambition at the scale of gods.
The "ego religious intuition that they'll somehow be a part of it" is also real and worth naming. The people at the frontier of AI development are not purely rational actors calculating expected value. They are human beings who want their lives to mean something, who have concluded that they are living through the most significant technological transition in human history, and who derive profound meaning from being at the center of it. That kind of meaning is not easily traded away for caution or restraint.
Harris captures something in the phrase "it's thrilling to start an exciting fire." There is an honesty in that formulation that the official safety narratives do not capture. Starting a fire of enormous consequence is thrilling. The thrill is real. And the people starting it are, as Harris notes, operating with the internal logic that they will die either way, so they might as well see what happens.
What this means for everyone else
The people Harris describes are not villains. Most of them are genuinely trying to build things carefully and believe that their work, on net, benefits humanity. But the motivational structure he describes is one in which the outcome, building the most powerful AI possible, is fixed; the only variables are which company gets there first and how much it claims to care about safety along the way.
That motivational structure exists independently of whether the people inside it are good or bad. It is a product of competitive dynamics, personal psychology, and a genuine philosophical belief about the direction of history. It would be easier to address if it were simple bad faith. It is harder to address because most of it is sincere.
Harris has spent years documenting the gap between what technology companies intend and what they produce. Social media was built by people who wanted to connect the world and genuinely believed they were doing so. The misalignment between intent and outcome was not the result of malice. It was the result of incentive structures and psychological blind spots that no one adequately examined in time.
The people building frontier AI are more aware of these risks than the people who built social media were. They talk about alignment, safety, and catastrophic risk more explicitly and more seriously than anyone in technology has before. Whether that awareness translates into different outcomes is the open question that the next decade will answer.
What Tristan Harris is doing, as he has always done, is pointing at the gap between the stated story and the actual motivational structure underneath it. That is not a comfortable thing to hear. It is also, historically, the kind of observation that turns out to matter.