AI Chips News

AI chip race shifts to memory, boosting Samsung, SK hynix


As Intel challenges Nvidia, Korean memory-makers tighten HBM pricing power

Intel is gearing up to challenge Nvidia’s dominance in the artificial intelligence accelerator market, but its chief executive says the industry’s most decisive constraint lies elsewhere — and it overwhelmingly favors Korea’s memory champions.

Speaking at an AI summit in San Francisco earlier in the week, Intel CEO Lip-Bu Tan warned that a global shortage of advanced memory could persist for at least two more years. As AI systems scale rapidly, he said, memory demand is accelerating faster than suppliers can expand output. Nvidia’s next-generation AI platform, Vera Rubin, is expected to intensify that imbalance by sharply increasing memory consumption per system.

That dynamic reinforces the structural dominance of Samsung Electronics and SK hynix, which together control the overwhelming majority of the global high-bandwidth memory (HBM) market — now the most critical component in AI computing.

While competition among graphics processing unit-makers and custom chip designers is widening, the memory layer is moving in the opposite direction: toward concentration.

Even before Intel’s renewed push, global technology firms were already working to reduce reliance on Nvidia by developing custom AI chips to optimize costs and workloads.

Examples include Google’s Ironwood TPU, Microsoft’s Maia 200 and Meta’s MTIA-v3, scheduled for launch in the first half of this year. Architecturally, these processors differ significantly. Operationally, they do not.

All require massive volumes of high-speed memory to function.

As AI models grow in size and complexity, the performance bottleneck is shifting from raw compute power to memory throughput — the speed at which data can be accessed and moved across chips. Training and inference now hinge on feeding enormous datasets with minimal latency, elevating memory from a supporting role to a system-level constraint.

This shift has seen HBM go from a niche accelerator component to a baseline requirement. HBM is already embedded in Google’s and Microsoft’s AI chips, and Meta plans to replace LPDDR5 with fifth-generation HBM3E in its MTIA-v3. Whether the processor is a GPU or a custom application-specific integrated circuit, HBM has become unavoidable.

“You can diversify processors, but you can’t diversify away from HBM,” an industry source said. “That’s where Samsung and SK hynix have an unassailable lead.”
