SK Hynix and Samsung Are the Only HBM3E Suppliers. That Is the AI Bottleneck.
Quick summary
SK Hynix holds ~60% of the HBM market and has sold out its capacity through 2026. Only two firms make HBM3E at scale. The HBM market is projected to hit $54-58B in 2026. What developers should do.
Every Nvidia H100 and H200 runs on High Bandwidth Memory (HBM). Only two companies in the world make HBM3E at scale: SK Hynix and Samsung. SK Hynix holds roughly 60% or more of the global HBM market and has sold out its DRAM, NAND, and HBM supply to Nvidia and other major customers through 2026. That duopoly is the AI memory bottleneck, and it is not going away soon.
What HBM Is and Why Only Two Companies Make It
HBM stacks DRAM dies vertically and connects them to the GPU over a wide, fast interface. It is the only memory type that can feed today's large AI chips without limiting throughput. HBM3 and HBM3E are the current workhorse generations; HBM4 is in development for the next Nvidia platform (Vera Rubin). The process is difficult, yields are tight, and the number of firms that can do it at volume is exactly two: SK Hynix and Samsung, both in South Korea. Micron is ramping HBM but has not reached the same volume; TrendForce and others still put the bulk of HBM3E supply with the two Korean suppliers.
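To see why that wide interface matters, a back-of-the-envelope bandwidth calculation helps. The figures below are illustrative assumptions, not vendor specs: HBM3E exposes a 1024-bit bus per stack at up to roughly 9.6 Gb/s per pin, versus a conventional graphics DRAM chip's 32-bit bus at around 16 Gb/s per pin.

```python
# Back-of-the-envelope: why a wide memory interface matters.
# Assumed figures (illustrative): HBM3E = 1024-bit bus per stack at ~9.6 Gb/s
# per pin; a GDDR6 chip = 32-bit bus at ~16 Gb/s per pin.

def peak_bandwidth_tbs(pins: int, gbps_per_pin: float) -> float:
    """Peak interface bandwidth in TB/s (bits -> bytes, G -> T)."""
    return pins * gbps_per_pin / 8 / 1000

hbm3e_stack = peak_bandwidth_tbs(1024, 9.6)  # ~1.23 TB/s per stack
gddr6_chip = peak_bandwidth_tbs(32, 16.0)    # ~0.064 TB/s per chip

print(f"HBM3E stack: {hbm3e_stack:.2f} TB/s")
print(f"GDDR6 chip:  {gddr6_chip:.3f} TB/s")
# An accelerator carrying 6 HBM3E stacks approaches ~7 TB/s of peak bandwidth,
# which is why nothing else can feed a large AI chip at full throughput.
print(f"6 stacks:    {6 * hbm3e_stack:.1f} TB/s")
```

Real products run below these peak numbers, but the order-of-magnitude gap between a stacked, 1024-bit interface and a conventional one is the point.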
So for the bulk of the world's AI training and inference, the memory supply depends on two Korean suppliers. Nvidia does not have a second source in the way it has multiple logic fabs; for HBM, it is SK Hynix and Samsung or nothing at scale.
The Numbers: Market Share, Sold-Out Capacity, and Growth
- Roughly 60%: SK Hynix share of global HBM market (sources such as TrendForce; some estimates put it higher in HBM3E specifically)
- 70%: SK Hynix share of Nvidia initial HBM4 orders for the Vera Rubin platform (2026 to 2027)
- Sold out: SK Hynix has committed its DRAM, NAND, and HBM capacity through 2026 to Nvidia and other major customers; it cannot take new 2026 orders in the way it used to
- About two-thirds: share of total HBM shipments that will still be HBM3E in 2026 (HBM4 ramps later)
- USD 38 billion (2025) to USD 54-58 billion (2026): projected global HBM market size, implying roughly 40-50% year-over-year growth
- Samsung and Micron: Samsung is still working through HBM4 verification; Micron is the only plausible third supplier, but at lower volume than the Korean duopoly
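The growth figure in the list above is easy to sanity-check from the market-size projections themselves:

```python
# Sanity-check the projected HBM market growth (figures from the list above).
market_2025 = 38.0          # USD billions, 2025 projection
market_2026 = (54.0, 58.0)  # USD billions, 2026 projected range

growth = [(m / market_2025 - 1) * 100 for m in market_2026]
print(f"Implied growth: {growth[0]:.0f}% to {growth[1]:.0f}%")
# → Implied growth: 42% to 53%
```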
The HBM market is in a supercycle. AI cluster buildouts are driving demand far beyond what the two leaders can supply in the short term. That is why lead times for H100 and H200 have stayed long and why data center demand has helped push memory prices up and tightened supply for consumer and enterprise RAM. The HBM bottleneck and the broader RAM crisis are two sides of the same story: the same fabs make both HBM and DDR5, and HBM wins on margin.
Why There Is No Quick Fix
You cannot spin up HBM capacity in a year. The technology is hard, and qualification with Nvidia and AMD is strict. SK Hynix has locked in the lion's share of Nvidia HBM4 for Vera Rubin; Samsung is investing to close the gap. Micron will matter more in 2027 and beyond, but for 2026 the practical reality is two volume suppliers. Any disruption in Korea (natural disaster, geopolitical shock, or labour action) would immediately tighten GPU availability worldwide.
What Developers and Infrastructure Teams Should Do
You cannot buy HBM directly. You buy GPUs. So the practical impact is on lead times, allocation, and pricing for H100, H200, and future Nvidia and AMD datacenter GPUs. If you are planning a large cluster, your procurement timeline should assume that HBM supply stays constrained through at least 2026. Lock in orders early, and do not assume spot availability or quick delivery. Diversification would require Micron to reach much higher volume or a shift to different memory architectures; neither is likely in the next 12 to 24 months.
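To make "lock in orders early" concrete, here is a small planning sketch that works backwards from a target go-live date. All the lead times are hypothetical assumptions for illustration, not vendor quotes:

```python
from datetime import date, timedelta

# Hypothetical planning sketch: work backwards from a target go-live date.
# All lead times below are illustrative assumptions, not vendor quotes.
target_golive = date(2026, 9, 1)
gpu_lead_weeks = 36   # assumed allocation + delivery lead time for GPUs
buildout_weeks = 10   # assumed rack, network, and burn-in time
buffer_weeks = 8      # slack for allocation slips in a constrained market

order_by = target_golive - timedelta(
    weeks=gpu_lead_weeks + buildout_weeks + buffer_weeks
)
print(f"Place GPU orders by: {order_by.isoformat()}")
# → Place GPU orders by: 2025-08-19
```

The point of the exercise: with anything like these lead times, a cluster going live in late 2026 needs its orders placed roughly a year in advance.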
Key Takeaways
- Two: only SK Hynix and Samsung make HBM3E at volume; Micron is ramping but not at the same scale
- Roughly 60%: SK Hynix share of global HBM; it has sold out capacity through 2026 to Nvidia and others
- 70%: SK Hynix share of Nvidia HBM4 orders for Vera Rubin (2026 to 2027)
- HBM market: projected to grow from about USD 38B (2025) to USD 54 to 58B (2026)
- For developers: Plan for continued HBM-driven constraints on GPU availability and lead times; the bottleneck is structural; lock orders early and do not count on spot supply
- What to watch: Micron HBM3E ramp, Samsung HBM4 verification, and Nvidia Vera Rubin HBM4 rollout in 2026 to 2027
Written by
Abhishek Gautam
Full Stack Developer & Software Engineer based in Delhi, India. Building web applications and SaaS products with React, Next.js, Node.js, and TypeScript. 8+ projects deployed across 7+ countries.