SK Hynix Places $8B ASML Order — Largest Ever — to Build HBM4 for Nvidia Vera Rubin
Quick summary
SK Hynix ordered $8B of ASML EUV machines on March 24, the largest disclosed order in ASML history. The tools will produce HBM4 for Nvidia's Vera Rubin platform launching late 2026.
SK Hynix placed an $8 billion order for ASML EUV lithography machines on March 24, 2026 — the largest single disclosed order in ASML's history. The machines will go into two South Korean factories to produce High Bandwidth Memory for Nvidia's next-generation AI accelerator platform.
This is the memory industry's direct response to RAMageddon: the structural HBM shortage that has pushed DRAM prices up 171% year-over-year and is projected to constrain AI infrastructure through 2026. The catch: the solution won't arrive until late 2027.
Why $8 Billion and Why Now
SK Hynix is already Nvidia's dominant HBM supplier. The company supplies over 50% of Nvidia's total HBM demand in 2026, covering HBM3E for the current H200 platform and transitioning to HBM4 for the Vera Rubin platform launching later this year.
The $8 billion ASML order — comprising approximately 30 EUV machines delivered through 2027 — is designed to give SK Hynix the manufacturing capacity to hold that position as Nvidia scales to Vera Rubin volumes. Each EUV machine generates the extreme ultraviolet light used to print the fine transistor patterns on HBM dies. More machines means more wafers per month and more HBM output.
The scale of the order also signals competitive pressure. Samsung is the primary supplier of HBM4 for Vera Rubin specifically. TrendForce reported in March that Samsung and SK Hynix are both tapped as Nvidia Rubin HBM4 suppliers, with Samsung taking the lead on the Vera Rubin-specific HBM4 allocation. SK Hynix needs to expand capacity to defend its overall share of the HBM market as the product transitions from HBM3E to HBM4.
Where the Machines Go
The EUV tools will be split between two facilities:
Yongin factory — SK Hynix's new flagship fab complex south of Seoul. Yongin is the company's largest single capital investment project, designed from the start for advanced-node DRAM and HBM production. The site will eventually cover 4.15 million square metres across four production buildings. Building 1 is in its early production ramp now.
M15X facility in Cheongju — SK Hynix's dedicated HBM production centre. Cheongju already produces the majority of SK Hynix's current HBM3E output. The additional EUV machines expand HBM capacity at an existing site with proven infrastructure and a trained workforce, making M15X the faster path to additional HBM output.
The split is deliberate. Yongin builds long-term capacity for the next decade. M15X delivers incremental HBM output faster because the facility is already operational.
What HBM4 Actually Is and Why It Matters
HBM4 is the next generation of High Bandwidth Memory, designed for Nvidia's Vera Rubin platform which is expected to launch in late 2026. The key differences from HBM3E:
Bandwidth: HBM4 delivers approximately 1.5-2x the memory bandwidth of HBM3E. Vera Rubin needs this to feed the increased compute density of the new architecture.
Interface: HBM4 uses a 2048-bit memory interface, up from 1024-bit in HBM3E. Wider interface means more data can move between memory and processor per clock cycle.
Manufacturing complexity: HBM4 requires tighter lithography tolerances — the direct reason ASML EUV machines are needed. The older deep ultraviolet (DUV) machines that produced earlier HBM generations cannot hold the required tolerances for HBM4 at acceptable yield.
Power efficiency: HBM4 reduces power consumption per GB transferred by roughly 30% compared to HBM3E — critical for data centers trying to manage the power density of next-generation GPU racks.
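The bandwidth gap follows directly from the interface arithmetic. A back-of-the-envelope sketch, assuming per-pin data rates of 9.6 Gb/s for HBM3E and 8 Gb/s for HBM4 — illustrative figures roughly in line with published specs, not confirmed numbers for the parts SK Hynix will ship:

```python
# Peak per-stack bandwidth = interface width (bits) x per-pin rate (Gb/s) / 8.
# Pin rates here are assumptions for illustration, not vendor-confirmed specs.

def peak_bandwidth_gbps(interface_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack, in GB/s."""
    return interface_bits * pin_rate_gbps / 8

hbm3e = peak_bandwidth_gbps(1024, 9.6)  # 1024-bit interface
hbm4 = peak_bandwidth_gbps(2048, 8.0)   # 2048-bit interface
print(f"HBM3E ~{hbm3e:.0f} GB/s, HBM4 ~{hbm4:.0f} GB/s ({hbm4 / hbm3e:.2f}x)")
```

With these assumed rates the per-stack ratio lands around 1.7x, inside the 1.5-2x range above; faster HBM4 pin speeds would push it toward 2x.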
The Timing Problem
The fundamental issue with SK Hynix's $8 billion order is that it doesn't solve the 2026 shortage. ASML EUV machines take 12-18 months to manufacture, test, and deliver. They then require 6-12 months of fab installation, calibration, and yield ramp before they produce meaningful output.
The machines in this order will begin delivering in early-to-mid 2027. Meaningful HBM output from the new capacity arrives in late 2027 at the earliest.
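The two lead times stack, which is why the window is so wide. A minimal sketch of the timeline arithmetic, using only the order date and the month ranges stated above (the `add_months` helper is ours, not part of any SK Hynix or ASML disclosure):

```python
# Rough ramp timeline: EUV tool lead time (12-18 months) plus fab install,
# calibration, and yield ramp (6-12 months), counted from the order date.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

order = date(2026, 3, 24)
earliest = add_months(order, 12 + 6)   # best case: fastest delivery, fastest ramp
latest = add_months(order, 18 + 12)    # worst case: slowest delivery, slowest ramp
print(f"Meaningful output window: {earliest:%b %Y} to {latest:%b %Y}")
```

The best case (12-month delivery plus a 6-month ramp) lands in September 2027, consistent with "late 2027 at the earliest"; the worst case stretches into late 2028.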
That means the HBM shortage is structural through the end of 2026, and likely through 2027 as Vera Rubin demand absorbs whatever new supply comes online. The $8 billion order signals the industry's commitment to solving RAMageddon — it does not solve it in the near term.
How This Connects to the Current DRAM Crisis
SK Hynix's decision to spend $8 billion on EUV machines specifically for HBM is both a solution to the HBM shortage and an amplifier of the conventional DRAM shortage.
More EUV machines for HBM means more wafer capacity dedicated to HBM production. Every wafer that produces HBM is a wafer that doesn't produce DDR5 or LPDDR5 for PCs, phones, and servers. The capacity expansion helps close the HBM gap but simultaneously maintains pressure on conventional memory supply.
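That zero-sum trade-off is easy to make concrete. A toy model — every figure below is invented for illustration and none comes from SK Hynix disclosures:

```python
# Hypothetical wafer-allocation model: all figures are invented for illustration.

def conventional_wafers(total_starts: int, hbm_starts: int) -> int:
    """Monthly wafer starts left for DDR5/LPDDR5 after HBM takes its share."""
    return total_starts - hbm_starts

total = 120_000                              # assumed monthly wafer starts
before = conventional_wafers(total, 30_000)  # assumed pre-expansion HBM share
after = conventional_wafers(total, 45_000)   # assumed post-expansion HBM share
print(f"Conventional DRAM loses {before - after:,} wafer starts/month")
```

Total capacity is fixed in the short run, so every additional HBM wafer start comes one-for-one out of conventional DRAM — the mechanism behind the pricing pressure on PC and phone memory.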
The math changes when Yongin Building 1 and M15X reach full EUV capacity in 2027-2028. At that point, SK Hynix will have enough total fab capacity to serve both HBM demand and conventional memory markets at scale. Until then, the company has to optimise for the higher-margin HBM business.
What Samsung Is Doing in Response
Samsung is not standing still. Samsung's Pyeongtaek P4 fab — the world's largest semiconductor building by floor area — is in active production ramp for HBM4 and advanced DRAM. Samsung is the primary HBM4 supplier for Nvidia's Vera Rubin platform and has been shipping qualification samples since early 2026. Meanwhile, ASML export restrictions prevent China from accessing the same EUV machines SK Hynix is ordering — widening the technology gap between US-allied memory producers and Chinese fabs.
Samsung's strategy differs from SK Hynix's. Rather than concentrating HBM production at dedicated facilities, Samsung integrates HBM production into its broader DRAM manufacturing lines at Pyeongtaek. This gives Samsung more flexibility to shift capacity between HBM and conventional memory depending on demand — but at the cost of some HBM production efficiency.
The two companies are running parallel expansion programmes that will collectively add 40-50% to global HBM supply capacity by 2028. That is the analyst consensus estimate for when RAMageddon ends.
What Developers and Infrastructure Teams Should Watch
Vera Rubin availability. SK Hynix's $8 billion investment is timed specifically to ensure HBM4 supply for Vera Rubin at scale. If Vera Rubin launches late 2026 as scheduled, expect cloud providers to begin offering Vera Rubin-class GPU instances in early 2027. Developers building training or inference infrastructure should plan for Vera Rubin access to become meaningful in 2027, not 2026.
HBM3E pricing in 2026. The current H200 and H100 platforms use HBM3E. That supply is also constrained and will remain so through 2026 as SK Hynix and Samsung prioritise HBM4 qualification work for Vera Rubin. Expect H200 GPU instance prices to remain elevated through the year.
The South Korea energy dependency. SK Hynix's Cheongju and Yongin facilities depend on South Korean grid power, a significant portion of which comes from LNG imports through the Strait of Hormuz. The Ras Laffan force majeure declaration on March 18 and the US-Iran ceasefire negotiations with the March 28 deadline directly affected power cost projections for these fabs. Any escalation that closes Hormuz again would increase SK Hynix's production costs and potentially delay the ramp timeline.
Key Takeaways
- $8 billion ASML EUV order is the largest single disclosed order in ASML's history — approximately 30 machines delivered through 2027
- Two facilities: Yongin (new flagship factory) and M15X Cheongju (existing HBM production centre)
- HBM4 for Vera Rubin: SK Hynix supplies over 50% of Nvidia's total HBM in 2026; Samsung leads on Vera Rubin-specific HBM4
- Timing problem: new machines won't produce meaningful HBM output until late 2027 — the shortage is structural through 2026
- HBM4 vs HBM3E: up to 2x bandwidth, 2048-bit interface, roughly 30% better power efficiency — Vera Rubin requires it to run at full performance
- The RAMageddon connection: more HBM capacity means fewer wafers for conventional DRAM — SK Hynix is solving the AI shortage while maintaining pressure on PC/smartphone memory markets
- Vera Rubin for developers: meaningful cloud GPU access expected early 2027, not late 2026
Written by
Abhishek Gautam
Software Engineer based in Delhi, India. Writes about AI models, semiconductor supply chains, and tech geopolitics — covering the intersection of infrastructure and global events. 355+ posts cited by ChatGPT, Perplexity, and Gemini. Read in 121 countries.