ByteDance Routes 36,000 Nvidia B200 Chips Through Malaysia to Beat US Ban

Abhishek Gautam · 7 min read

Quick summary

ByteDance is deploying 36,000 Nvidia B200 Blackwell chips in Malaysia via a $2.5B deal, using a Southeast Asian cloud firm to legally access hardware blocked under US export controls.

ByteDance is deploying roughly 36,000 Nvidia B200 Blackwell chips in Malaysia through a deal worth more than $2.5 billion, using a Southeast Asian cloud firm to access hardware that US export controls block from being sold directly to Chinese companies. The Wall Street Journal broke the story on March 13, 2026. Nvidia, ByteDance, and the intermediary firms declined to comment.

What ByteDance Actually Did

ByteDance worked with Malaysian cloud company Aolani Cloud and hardware firm Aivres to deploy approximately 500 Nvidia Blackwell computing systems in Malaysia. Each system contains around 72 B200 chips, putting the total at roughly 36,000 GPUs. At current market prices, 36,000 B200 chips represent more than $2.5 billion in hardware value if fully deployed.
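The headline figures above can be sanity-checked with basic arithmetic. Note that the per-chip price below is implied by dividing the reported deal value by the chip count; it is an estimate, not a quoted Nvidia list price.

```python
# Sanity-check of the reported deployment figures.
systems = 500          # reported Blackwell computing systems
chips_per_system = 72  # approximate B200 GPUs per system
total_chips = systems * chips_per_system
print(total_chips)  # 36000

deal_value_usd = 2.5e9  # reported lower bound of the deal's value
# Implied (not quoted) per-chip cost if the full value were hardware.
implied_price_per_chip = deal_value_usd / total_chips
print(round(implied_price_per_chip))  # 69444
```

The implied figure of roughly $69,000 per chip is in the same range as reported street prices for Blackwell-class GPUs, which is why the $2.5 billion valuation is consistent with the 36,000-chip count.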

The structure of the deal is the key detail. ByteDance did not buy the chips directly — US export controls prohibit selling advanced Nvidia GPUs to Chinese entities without a licence. Instead, the chips were acquired through an intermediary chain: Aivres supplied hardware to Aolani Cloud, which operates the data centre in Malaysia, and ByteDance accesses the compute as a customer. The chips never formally cross into China.

This is not a new playbook. Chinese technology companies have been routing AI compute through Southeast Asian data centres for over a year. What makes the ByteDance deal notable is its scale — 36,000 B200 chips is among the largest reported deployments of Blackwell hardware outside the United States.

Why Malaysia Specifically

Malaysia has become the preferred hub for this type of arrangement for several reasons. It has an established data centre industry, a stable regulatory environment, and no restrictions on importing Nvidia hardware. The country already hosts major hyperscaler infrastructure from Microsoft, Google, and AWS.

More importantly, Malaysia sits outside the tightest tier of US export control restrictions. The US Commerce Department's export control framework for AI chips divides countries into tiers based on national security risk. Malaysia falls into a middle tier where advanced chips can be imported and operated without the restrictions applied directly to China, Russia, or Iran.

The arrangement is technically legal. Aolani Cloud owns the chips and operates them in Malaysia. ByteDance accesses compute capacity as a cloud customer. There is no direct sale of restricted hardware to a Chinese entity. US regulators are aware of this pattern and have been studying whether to close the loophole, but as of March 2026, no rule change has been implemented.

The Broader China Chip Routing Network

ByteDance is one of many Chinese AI companies running this playbook. Reporting through early 2026 has identified similar arrangements involving:

  • Alibaba Cloud operating Nvidia infrastructure in Singapore and Indonesia
  • Tencent accessing compute through Vietnamese data centre partners
  • Baidu using Japanese cloud intermediaries for training runs that require hardware unavailable domestically
  • Several unnamed Chinese AI labs routing training jobs through Middle Eastern cloud providers before the Iran conflict disrupted that channel

The pattern reveals a structural gap in US export controls. The restrictions were designed to prevent advanced AI chips from reaching Chinese military and state entities. They work reasonably well for direct sales. They work less well when compute is delivered as a service through third-country intermediaries — a model that looks identical to legitimate commercial cloud usage.

US officials have reportedly been in discussions about closing this gap through updated guidance, but the challenge is that any rule broad enough to cover the ByteDance arrangement would also restrict legitimate multinational companies from running AI workloads in Southeast Asian data centres.

What the B200 Chips Are Used For

ByteDance says the Malaysia deployment is for AI research and development outside China and for serving growing global demand from its customers. ByteDance operates TikTok, which has over a billion active users globally, along with Douyin (the Chinese version of TikTok), CapCut (video editing), and a growing portfolio of AI tools.

The B200 is Nvidia's latest Blackwell architecture GPU, offering roughly 2.5x the training performance of the previous-generation H100 and significant improvements in inference efficiency. For ByteDance, access to Blackwell-class hardware matters for both training new recommendation models and serving inference at the scale TikTok requires — over a billion video recommendations per day.

The company has been investing heavily in its own AI models, including a large language model research effort that competes with Chinese rivals like Alibaba's Qwen and Baidu's ERNIE. Training competitive frontier models requires Blackwell-class hardware. Without the Malaysia arrangement, ByteDance would be limited to Huawei's Ascend 910C and older Nvidia chips already inside China — hardware that trails Blackwell by a significant margin.

What This Means for Developers

The ByteDance-Malaysia deal has direct implications for developers building on or competing with Chinese AI platforms:

API access and model quality: ByteDance's AI models power TikTok's recommendation engine and its growing developer-facing APIs. Access to Blackwell compute means ByteDance can continue training competitive models. Developers using ByteDance AI tools or building on platforms powered by their models will see continued improvement rather than stagnation from hardware deprivation.

The Southeast Asia compute market: Demand from Chinese companies routing around export controls is driving a secondary GPU market in Malaysia, Singapore, Vietnam, and Indonesia. Cloud compute in these regions is becoming more competitive and better-resourced. Developers in Southeast Asia and India with cost-sensitive AI workloads now have more regional options.

Export control gaps as a competitive factor: If US regulators close the Southeast Asia routing loophole, Chinese AI companies lose access to frontier hardware and fall further behind on model quality. If the loophole stays open, Chinese labs maintain competitive capability. This regulatory question will shape the AI competitive landscape for the next two to three years.

The Regulatory Response

US export control enforcement is intensifying. The Bureau of Industry and Security has expanded its Entity List in 2026 to cover more Chinese AI companies and their overseas subsidiaries. The challenge is that Aolani Cloud is a Malaysian company, not a Chinese entity — adding it to the Entity List would require evidence of knowing facilitation of controls evasion, which is a higher legal bar than simply operating in Malaysia.

Congress has been pushing for stricter controls, including proposals to require end-use verification for advanced chips sold to third-country cloud providers — essentially requiring cloud firms in Malaysia and Singapore to prove their customers are not Chinese AI labs. The practical enforcement challenge is enormous.

Nvidia, for its part, has consistently said it complies with all US export control requirements. Selling to Aivres, a US-registered hardware company, is legal. What Aivres does with the hardware after the sale is a question for regulators, not Nvidia.

Key Takeaways

  • ByteDance deployed 36,000 Nvidia B200 Blackwell chips in Malaysia via Aolani Cloud and Aivres, worth more than $2.5 billion — among the largest known offshore Blackwell deployments by a Chinese company
  • The arrangement is technically legal — chips are owned by a Malaysian company and accessed by ByteDance as a cloud customer, not a direct sale to a Chinese entity
  • Southeast Asia has become a primary AI compute hub for China — Alibaba, Tencent, Baidu and others run similar arrangements through Singapore, Indonesia, Vietnam and Malaysia
  • US export controls have a structural gap — restrictions designed for direct sales do not effectively cover compute-as-a-service through third-country intermediaries
  • Regulatory response is underway but closing the loophole without disrupting legitimate multinational cloud usage is a hard policy problem
  • For developers: Southeast Asian cloud compute is becoming better-resourced as Chinese demand drives GPU deployments — regional pricing and availability will improve


Written by

Abhishek Gautam

Full Stack Developer & Software Engineer based in Delhi, India. Building web applications and SaaS products with React, Next.js, Node.js, and TypeScript. 8+ projects deployed across 7+ countries.