IEEE – China’s Tech Giants Race to Replace Nvidia’s AI Chips

For more than a decade, Nvidia’s chips have been the beating heart of China’s AI ecosystem. Its GPUs powered search engines, video apps, smartphones, electric vehicles, and the current wave of generative AI models. Even as Washington tightened export rules for advanced AI chips, Chinese companies kept buying “China-only” Nvidia chips stripped of their most advanced features: the H800, A800, and H20.

But by 2025, patience in Beijing had seemingly snapped. State media began labeling Nvidia’s China-compliant H20 as unsafe and possibly compromised with hidden “backdoors.” Regulators summoned company executives for questioning, while the Financial Times reported that tech companies including Alibaba and ByteDance had been quietly told to cancel new Nvidia GPU orders. The Chinese AI startup DeepSeek also signaled in August that its next model would be designed to run on China’s “next-generation” domestic AI chips.

The message was clear: China could no longer bet its AI future on a U.S. supplier. If Nvidia wouldn’t—or couldn’t—sell its best hardware in China, domestic alternatives would have to fill the void by designing specialized chips for both AI training (building models) and AI inference (running them).

That’s difficult—in fact, some say it’s impossible. Nvidia’s chips set the global benchmark for AI computing power. Matching them requires not just raw silicon performance but memory, interconnect bandwidth, software ecosystems, and above all, production capacity at scale.

Still, a few contenders have emerged as China’s best hope: Huawei, Alibaba, Baidu, and Cambricon. Each tells a different story about China’s bid to reinvent its AI hardware stack.

If Nvidia is out, Huawei, one of China’s largest tech companies, looks like the natural replacement. Its Ascend line of AI chips has matured under U.S. sanctions, and in September 2025 the company laid out a multi-year public roadmap:

  • Ascend 950, expected in 2026 with a performance target of 1 petaflop in the low-precision FP8 format that’s commonly used in AI chips. It will have 128 to 144 gigabytes of on-chip memory, and interconnect bandwidths (a measure of how fast it moves data between components) of up to 2 terabytes per second.
  • Ascend 960, expected in 2027, is projected to double the 950’s capabilities.
  • Ascend 970 is further down the line, and promises significant leaps in both compute power and memory bandwidth.

The current offering is the Ascend 910B, introduced after U.S. sanctions cut Huawei off from global suppliers. Roughly comparable to the A100, Nvidia’s top chip in 2020, it became the de facto option for companies that couldn’t get Nvidia’s GPUs. One Huawei official even claimed the 910B outperformed the A100 by around 20 percent in some training tasks in 2024. But the chip still relies on an older type of high-speed memory (HBM2E) and can’t match Nvidia’s H20: it holds about a third less data in memory and transfers data between chips about 40 percent more slowly.

The company’s latest answer is the 910C, a dual-chiplet design that fuses two 910Bs. In theory, it can approach the performance of Nvidia’s H100 (Nvidia’s flagship chip until 2024); Huawei showcased a 384-chip Atlas 900 A3 SuperPoD cluster that reached roughly 300 Pflops of compute, implying that each 910C can deliver just under 800 teraflops when performing calculations in the FP16 format. That’s still shy of the H100’s roughly 2,000 Tflops, but it’s enough to train large-scale models if deployed at scale. In fact, Huawei has detailed how it used Ascend AI chips to train DeepSeek-like models.
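The per-chip figure follows directly from the cluster numbers above; a quick back-of-the-envelope check (using only the values reported in the text, with the H100 figure as the rough approximation cited here, not an official spec):

```python
# Divide the Atlas 900 A3 SuperPoD's total FP16 compute by its chip count
# to recover the implied per-910C throughput.
CLUSTER_PFLOPS_FP16 = 300    # total cluster compute, from the article
CHIPS = 384                  # 910C chips in the cluster

per_chip_tflops = CLUSTER_PFLOPS_FP16 * 1000 / CHIPS
print(f"Implied per-910C FP16 throughput: {per_chip_tflops:.0f} Tflops")

# Compare against the ~2,000 Tflops FP16 figure the article cites for the H100.
H100_TFLOPS_FP16 = 2000
print(f"Fraction of H100 throughput: {per_chip_tflops / H100_TFLOPS_FP16:.0%}")
```

This works out to about 781 Tflops per chip, i.e. “just under 800 teraflops,” or roughly 40 percent of the H100 figure.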

To address the performance gap at the single-chip level, Huawei is betting on rack-scale supercomputing clusters that pool thousands of chips together for massive gains in computing power. Building on its Atlas 900 A3 SuperPoD, the company plans to launch the Atlas 950 SuperPoD in 2026, linking 8,192 Ascend chips to deliver 8 exaflops of FP8 performance, backed by 1,152 TB of memory and 16.3 petabytes per second of interconnect bandwidth. The cluster will span a footprint larger than two full basketball courts. Looking further ahead, Huawei’s Atlas 960 SuperPoD is set to scale up to 15,488 Ascend chips.
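The Atlas 950 SuperPoD’s headline numbers can likewise be sanity-checked against the Ascend 950’s per-chip targets from the roadmap above (all inputs are figures quoted in the article; the 1 TB = 1,000 GB conversion is an assumption, since the article does not specify binary or decimal units):

```python
# Divide the Atlas 950 SuperPoD's cluster-level specs by its chip count and
# compare with the Ascend 950's stated per-chip targets.
CHIPS = 8192
CLUSTER_EFLOPS_FP8 = 8       # 8 exaflops of FP8 compute
CLUSTER_MEMORY_TB = 1152     # 1,152 TB of total memory

per_chip_pflops = CLUSTER_EFLOPS_FP8 * 1000 / CHIPS          # exa -> peta
per_chip_mem_gb = CLUSTER_MEMORY_TB * 1000 / CHIPS           # assuming 1 TB = 1,000 GB

print(f"FP8 compute per chip: {per_chip_pflops:.2f} Pflops")  # vs. the 1-Pflop target
print(f"Memory per chip: {per_chip_mem_gb:.0f} GB")           # vs. the 128-144 GB range
```

Both divisions land where the roadmap says they should: about 0.98 Pflops of FP8 compute per chip, and about 141 GB of memory, within the 128-to-144-gigabyte range announced for the Ascend 950.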

Hardware isn’t Huawei’s only play. Its MindSpore deep learning framework and lower-level CANN software are designed to lock customers into its ecosystem, offering a domestic alternative to PyTorch (a popular framework from Meta) and CUDA (Nvidia’s platform for programming GPUs) respectively.

State-backed firms and U.S.-sanctioned companies like iFlytek, 360, and SenseTime have already signed on as Huawei clients. The Chinese tech giants ByteDance and Baidu have also ordered small batches of chips for trials.

Yet Huawei isn’t an automatic winner. Chinese telecom operators such as China Mobile and China Unicom, which are also responsible for building China’s data centers, remain wary of Huawei’s influence. They often prefer to mix GPUs and AI chips from different suppliers rather than fully commit to Huawei. Big internet platforms, meanwhile, worry that partnering too closely could hand Huawei leverage over their own intellectual property.

Even so, Huawei is better positioned than ever to take on Nvidia.
