Jensen Huang Told No Priors There Is No AI Bubble. Here Is the Economic Logic Behind His Argument.
Quick summary
Jensen Huang went on the No Priors podcast and pushed back directly on the AI bubble thesis. His argument is not based on hype. It is based on the economics of inference at scale, and on why cheap AI models increase demand rather than reduce it.
Every few months, someone publishes a piece arguing that the AI infrastructure build-out is a bubble. The data centers, the GPU orders, the multi-billion-dollar model training runs. The argument usually goes: this much capital cannot be justified, adoption is not matching investment, and when the correction comes it will be severe.
Jensen Huang disagrees. He has said so repeatedly, and his most detailed public articulation of why came in his appearance on the No Priors podcast. The argument is worth understanding in detail because it is not just a CEO defending his company's stock price. It is an internally coherent economic thesis.
The Core Argument: Inference Is the Business
Most public discussion about AI compute focuses on training. Training a frontier model costs hundreds of millions of dollars. That is the number that gets quoted in headlines.
Huang's point is that training is a one-time cost amortized over many uses. The real economic activity is inference: running the trained model for users who want answers. And inference demand has characteristics that make it grow as models get cheaper rather than stabilize or decline.
This is the mechanism he keeps returning to. When a model gets cheaper to run, more people use it. When more people use it, more use cases emerge. When more use cases emerge, the total amount of compute consumed goes up even though the cost per query went down. This is a version of what economists call the Jevons Paradox, which originally described how improving the efficiency of coal-burning engines increased total coal consumption by making more applications economically viable.
The AI version: making inference cheap does not reduce Nvidia's market. It expands the range of things people do with AI, which increases total inference volume, which increases total compute demand.
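The mechanism can be sketched as a simple price-elasticity model. The numbers below are illustrative assumptions for this article, not figures from Huang or Nvidia; the point is only that total compute spend rises when per-query costs fall if, and only if, demand is elastic (elasticity greater than 1):

```python
# Toy model of the Jevons dynamic for inference compute.
# All figures are illustrative assumptions, not real market data.

def total_compute_spend(cost_per_query: float, elasticity: float,
                        base_cost: float = 1.0, base_queries: float = 1.0) -> float:
    """Query volume scales as (cost / base_cost) ** -elasticity.

    Total spend is volume times unit cost, so with elasticity > 1 a
    price cut *raises* total compute spend; with elasticity < 1 it
    lowers it. The bubble debate is, in part, a debate over which
    regime AI inference demand is actually in.
    """
    queries = base_queries * (cost_per_query / base_cost) ** -elasticity
    return queries * cost_per_query  # spend = volume * unit cost

# A 10x drop in per-query cost under two hypothetical demand regimes:
elastic = total_compute_spend(0.1, elasticity=1.5)    # spend rises ~3.16x
inelastic = total_compute_spend(0.1, elasticity=0.5)  # spend falls to ~0.32x
```

Under these made-up parameters, the same 10x cost reduction either triples total spend or cuts it by two-thirds, depending entirely on the elasticity assumption.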
The DeepSeek Moment
In early 2025, DeepSeek released a model with performance comparable to leading frontier models at a fraction of the training cost. The market read this as evidence that frontier capability was getting cheap, and Nvidia's stock dropped roughly 17 percent in a single day as investors wondered whether the massive GPU orders would slow down.
Huang's response to this was patient and consistent. He argued that the DeepSeek result did not undermine the case for more compute. It strengthened it.
If you can get the same capability for one-tenth the compute, more companies can afford to use frontier-level AI. If more companies can use it, the number of inference requests goes up dramatically. If inference volume goes up, the total demand for GPUs goes up, not down.
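That chain of claims is ultimately arithmetic. A quick sketch with hypothetical numbers makes the break-even point explicit (the 10x efficiency figure comes from the discussion above; the 25x adoption figure is an assumption chosen purely for illustration):

```python
# Back-of-the-envelope arithmetic for "cheaper model, more total compute".
# All figures are hypothetical and chosen only to show the break-even point.

old_compute_per_query = 10.0   # arbitrary units
new_compute_per_query = 1.0    # a 10x efficiency gain, as in the DeepSeek case

old_query_volume = 1_000_000
# Break-even: usage must grow by the same factor as efficiency improved
break_even_volume = old_query_volume * (old_compute_per_query / new_compute_per_query)

# Assume adoption grows 25x because frontier capability is now affordable:
new_query_volume = 25 * old_query_volume

old_total = old_compute_per_query * old_query_volume    # 10,000,000 units
new_total = new_compute_per_query * new_query_volume    # 25,000,000 units
# Total compute demand rose 2.5x even though each query costs 10x less.
```

The whole disagreement compresses into one question: does usage grow by more or less than the efficiency factor? Above the break-even line, Huang is right; below it, the bears are.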
The analogy he used is solar power. Solar panels getting dramatically cheaper did not reduce the total energy infrastructure investment. It accelerated it, because lower costs made many more solar projects economically viable. The same dynamic applies to AI compute, in his view.
What the Data Centers Are Actually Buying
Another part of Huang's argument is about what the hyperscalers and enterprises are actually ordering when they buy GPU clusters.
They are not just buying capacity for current workloads. They are buying optionality. A large language model deployed today might need to run at 10x the inference volume in 18 months if adoption continues at the current rate. Companies are building infrastructure headroom because the cost of being caught without it is worse than the cost of having too much.
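The "10x in 18 months" figure above is a scenario, not a forecast, but it is worth seeing what it implies: roughly 13.7 percent compound growth per month, which is why operators provisioning for the end state accept low utilization early on. A small sketch, with a hypothetical utilization helper:

```python
# What "10x inference volume in 18 months" implies as a compound growth rate.
# The 10x/18-month scenario is illustrative, not a measured forecast.

target_multiple = 10.0
months = 18

# Solve (1 + g) ** 18 = 10 for the monthly growth rate g:
monthly_growth = target_multiple ** (1 / months) - 1   # ~0.137, i.e. ~13.7%/month

def utilization(month: int) -> float:
    """Fraction of month-18 capacity in use at a given month, if the
    full end-state capacity is provisioned up front."""
    return (1 + monthly_growth) ** month / target_multiple
```

Under this scenario, capacity bought for month 18 sits at 10 percent utilization on day one. That is what "buying optionality" looks like on a dashboard, and it is also why the same cluster can be read as overbuilding or as prudence depending on which month you audit it.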
He also points out that the use cases for AI inference are still early. Enterprises are running pilots and small deployments. When those pilot projects move to production at scale, the inference volumes jump significantly. The infrastructure being ordered now is partly for workloads that do not fully exist yet.
This is a bet on adoption curves, and it is the kind of bet that can look like overbuilding right up until the moment it looks like underbuilding. Huang's view is that AI is in the equivalent of the internet's second build-out phase: the one that followed a genuine overcorrection but was ultimately justified by real adoption of web-based applications.
The Skeptical View
The honest version of this analysis acknowledges where the bear case has merit.
Capital market cycles do not always follow underlying economic logic. Investors can build in too much optimism about adoption timelines, and the correction when it comes can be severe even if the long-term thesis is correct. The dot-com crash destroyed a lot of capital invested in companies that were right about the internet but wrong about the timeline.
There is also a substitution effect that Huang's argument does not fully address. If models get dramatically more efficient, enterprises might hit their goals with less infrastructure than they currently plan to buy. The Jevons Paradox historically held when efficiency gains enabled genuinely new applications. If AI mostly substitutes for existing software rather than creating new categories of work, the demand expansion could be smaller than Huang projects.
The honest assessment is that Huang's thesis is coherent and the mechanism is real. The Jevons dynamic does apply to compute markets. But the magnitude of the demand expansion and the pace of adoption are both highly uncertain.
Why His View Matters Regardless of Whether He Is Right
Jensen Huang is not a neutral party. Nvidia sells the GPUs that AI companies are buying. His incentive is to believe the bull case, and his public communications will naturally emphasize it.
What makes his argument worth taking seriously anyway is that it is specific and falsifiable. He is not making vague statements about AI being transformative. He is making a concrete claim about the relationship between inference costs and inference demand.
If that relationship holds, then cheaper models mean more GPU sales, not fewer. If it does not hold, and enterprises achieve their AI goals with the infrastructure they already have rather than continuing to buy more, then the bubble thesis is closer to correct.
The data centers being built right now are the test. In 18 to 24 months, we will know whether inference demand grew fast enough to justify the capital deployed. Huang is betting it will. He has been right about this market longer than most people have been paying attention to it.
The bubble question, in the end, comes down to whether AI adoption curves look like the internet or like something that peaks earlier. Huang's bet is firmly on the internet side of that comparison.
Written by
Abhishek Gautam
Full Stack Developer & Software Engineer based in Delhi, India. Building web applications and SaaS products with React, Next.js, Node.js, and TypeScript. 8+ projects deployed across 7+ countries.