STAMFORD, Conn., October 29, 2024 — Global semiconductor revenue is projected to grow 14% in 2025, according to the latest forecast from Gartner, Inc. In 2024, the market is forecast to grow 19%.
Following a decline in 2023, semiconductor revenue is rebounding and expected to record double-digit growth in 2024 and 2025 (see Table 1). “The growth is driven by a continued surge in AI-related semiconductor demand and a recovery in electronics production, while demand from the automotive and industrial sectors remains weak,” said Rajeev Rajput, Senior Principal Analyst at Gartner.
Worldwide memory revenue is forecast to grow 20.5% in 2025, to total $196.3 billion. Sustained undersupply is expected to drive NAND prices up 60% in 2024, but prices are poised to decline by 3% in 2025. With slower supply growth and a softer pricing landscape in 2025, NAND flash revenue is forecast to grow 12% from 2024.
The DRAM market will rebound, driven by tightening supply, unprecedented high-bandwidth memory (HBM) production, rising demand, and increasing double data rate 5 (DDR5) prices. Overall, DRAM revenue is expected to grow in 2025 over 2024.
AI Impact on Semiconductors
Since 2023, GPUs have dominated the training and development of AI models, and GPU revenue is projected to grow 27% in 2025. “However, the market is now shifting to a return on investment (ROI) phase where inference revenues need to grow to multiples of training investments,” said George Brocklehurst, VP Analyst at Gartner.
One consequence of this shift is a steep increase in demand for HBM, a high-performance memory solution for AI servers. “Vendors are investing significantly in HBM production and packaging to match next-generation GPU/AI accelerator memory requirements,” said Brocklehurst.
HBM revenue is expected to increase by more than 284% in 2024 and a further 70% in 2025. Gartner analysts predict that by 2026, more than 40% of HBM chips will facilitate AI inference workloads, compared to less than 30% today, mainly due to increased inference deployments and the limited repurposing of training GPUs.