Jensen Huang: Nvidia Will Stop Investing in OpenAI and Anthropic
Quick summary
Nvidia CEO Jensen Huang announced Nvidia will no longer invest in OpenAI or Anthropic. Here's why the chip giant is pulling back and what it means for the AI industry.
Jensen Huang Just Changed Nvidia's Relationship With the AI Industry
Nvidia CEO Jensen Huang has announced that Nvidia will no longer invest in OpenAI or Anthropic. The statement is brief but consequential. Nvidia has been a foundational investor in, and hardware supplier to, both companies — GPT-4o, Claude 3.5, and every other major frontier model run on Nvidia H100 and H200 GPUs. The decision to pull investment signals a strategic repositioning that goes beyond capital allocation.
This is not a dispute. Nvidia is not walking away from OpenAI and Anthropic as customers. Both companies will continue buying Nvidia chips — there is no viable alternative at scale in 2026. What Huang is signaling is that Nvidia is stepping back from the equity stake model, where the chip supplier also holds ownership in the companies it sells to. That is a meaningful line to draw, and the timing — as both OpenAI and Anthropic approach valuations in the hundreds of billions — makes the statement more pointed.
What Nvidia's Investment in OpenAI and Anthropic Actually Was
Nvidia's stakes in frontier AI labs have been strategic rather than financial. The company participated in funding rounds at both OpenAI and Anthropic not primarily to generate returns on equity but to deepen relationships with the most compute-intensive customers in the world. A frontier AI lab that trains models requiring tens of thousands of GPUs is exactly the customer Nvidia wants to keep close.
The structure of these investments also gave Nvidia visibility into research direction. Knowing where OpenAI and Anthropic are heading — which architectures they are exploring, what scale of compute they plan to deploy next — is enormously valuable for Nvidia's own roadmap planning. The H100 succeeded because Nvidia knew what transformer training workloads required before most of the market understood the scale of the demand.
Pulling investment means giving up that visibility. Huang has evidently concluded that the downsides of holding equity in these companies now outweigh the information and relationship advantages the stakes provided.
Why Nvidia Is Making This Move Now
There are three plausible explanations, and most likely all three are operating at once.
Conflict of interest is becoming harder to manage. OpenAI and Anthropic are no longer just AI research labs. Both are building full-stack AI platforms — APIs, consumer products, enterprise deployments. They are increasingly competitors with companies that are also Nvidia customers. Nvidia holding equity in OpenAI while simultaneously supplying GPUs to Google, Microsoft, Amazon, and Meta creates a conflict of interest that becomes more visible as the competitive dynamics intensify. Huang may be pre-empting pressure from other large customers to demonstrate that Nvidia is not tilting the playing field.
OpenAI and Anthropic are developing custom silicon. Both companies have announced or are rumored to be developing custom AI chips — following the path Apple, Google, and Amazon blazed with their own silicon programs. OpenAI has been working on custom inference chips. If and when these companies reduce their dependence on Nvidia hardware, Nvidia's strategic rationale for holding equity weakens substantially. Being an investor in a company that is trying to replace your core product is an uncomfortable position.
Regulatory scrutiny of vertical integration in AI is rising. Antitrust regulators in the US, EU, and UK have been examining the concentration of power in AI infrastructure. A company that supplies the dominant hardware layer and holds equity stakes in the dominant model layer draws exactly the kind of attention that creates regulatory risk. Nvidia pulling investment before being asked to is a defensive move that reduces the surface area for antitrust scrutiny.
The Nvidia-OpenAI Relationship Is More Complex Than It Appears
The public narrative positions Nvidia and OpenAI as natural allies — Nvidia builds the hardware, OpenAI builds the models, everyone benefits. The reality is more competitive.
OpenAI has been working on its own chip program. Reports from late 2025 described OpenAI designing custom inference accelerators with TSMC, targeting lower cost and higher efficiency for serving ChatGPT at scale. If those chips ship in volume, OpenAI's dependence on Nvidia for inference workloads decreases. Training at the frontier will likely remain on Nvidia hardware for years, but inference — where the money is at consumer scale — is the workload OpenAI wants to move onto its own silicon.
Nvidia is also not a passive hardware supplier anymore. CUDA, the programming model that locks developers into Nvidia GPUs, is the deepest moat in the AI stack. But Nvidia is increasingly building its own AI software — NIM microservices, NeMo for model customization, the Nvidia AI Enterprise platform. These are not hardware products. They are software products that sit in the same layer as OpenAI's API. The relationship between Nvidia and OpenAI is shifting from supplier-customer to supplier-customer-and-competitor simultaneously.
Huang pulling investment may be partly about clarifying that positioning before it becomes awkward.
What This Means for Anthropic
The Anthropic angle is different. Anthropic has been the more cautious AI lab — focused on safety research, more selective about deployment, backed primarily by Amazon (roughly $8 billion committed) and Google (roughly $2 billion committed). Nvidia's investment in Anthropic was smaller in both absolute size and strategic significance than its OpenAI relationship.
But Anthropic is also the AI company most aligned with the hyperscalers that are Nvidia's largest customers. Amazon has built a deep hardware stack — Trainium and Inferentia chips — specifically to reduce AWS's dependence on Nvidia for AI workloads. Running Anthropic models on Trainium is an explicit part of Amazon's strategy to shift training and inference costs off Nvidia silicon.
Nvidia holding equity in Anthropic while Amazon-backed Anthropic trains on Trainium is a conflict of interest in the other direction. Huang may simply be cleaning up a position that was becoming untenable as Anthropic's hyperscaler alignment deepened.
Implications for the Broader AI Investment Ecosystem
Nvidia's withdrawal sets a precedent that will be watched carefully. Other infrastructure companies — Microsoft, Google, Amazon — all hold equity in AI labs they also supply. The conflicts of interest are significant at every level. If Nvidia's move triggers a broader conversation about whether infrastructure suppliers should hold equity in application-layer companies, the implications reach well beyond chips.
For the AI labs themselves, the shift is marginal in the short term. OpenAI and Anthropic are not dependent on Nvidia equity for capital — both have access to hundreds of billions of dollars through their hyperscaler relationships and direct funding rounds. Losing a strategic investor is more of a signal than a financial event.
The signal it sends is that even Nvidia — whose entire business depends on the AI training market growing as fast as possible — has concluded that clean separation between infrastructure and application layers is more strategically valuable than the information advantages of being an insider investor.
Developer and Enterprise Implications
For developers building on OpenAI or Anthropic APIs, Huang's announcement changes nothing immediately. The chips that power those APIs are still Nvidia chips, and that is not changing in 2026 or 2027.
The longer-term implication is that Nvidia is positioning itself as neutral infrastructure — the Switzerland of AI hardware. A company that holds no equity in any AI lab is more credibly neutral when selling to all of them. That neutrality matters as enterprise buyers push back on AI vendor lock-in and demand that their infrastructure suppliers do not have conflicted interests.
For developers choosing between AI infrastructure providers — deciding whether to run inference on AWS Trainium, Google TPUs, or Nvidia-based clouds — Nvidia's neutrality claim becomes a selling point. The company that makes money regardless of which AI lab wins is in some ways the best-positioned company in the ecosystem.
Key Takeaways
- Jensen Huang announced Nvidia will stop investing in OpenAI and Anthropic, stepping back from equity stakes in the two most prominent frontier AI labs
- Nvidia remains their largest hardware supplier — this is an investment decision, not a customer relationship change. Both companies will continue buying Nvidia GPUs
- Three likely drivers: conflict of interest as OpenAI and Anthropic compete with other Nvidia customers; both labs developing custom silicon that would reduce Nvidia dependence; and rising regulatory scrutiny of vertical integration in AI
- OpenAI's custom chip program is the most significant long-term threat — inference workloads at ChatGPT scale are where custom silicon ROI is clearest
- Anthropic's Amazon alignment — training on Trainium, deep AWS partnership — made Nvidia's equity stake increasingly uncomfortable
- The strategic signal: Nvidia is positioning itself as neutral AI infrastructure, a supplier to all labs rather than a stakeholder in any. That neutrality becomes a competitive advantage as enterprise demand for non-conflicted infrastructure grows
Written by
Abhishek Gautam
Full Stack Developer & Software Engineer based in Delhi, India. Building web applications and SaaS products with React, Next.js, Node.js, and TypeScript. 8+ projects deployed across 7+ countries.