Rising capital expenditure among major hyperscalers is increasingly driven by surging memory and storage costs rather than by compute expansion alone. As AI models scale, high-performance HBM and NAND flash have shifted from commodities to critical, non-fungible infrastructure components. Samsung and SanDisk are capturing this value, reporting record margins as they secure multi-year supply agreements backed by financial guarantees. The shift toward agentic AI, exemplified by DeepSeek’s SSD-centric inference architecture, demands massive, low-latency storage for the KV cache, further cementing the importance of NAND. While hyperscalers such as Google, Amazon, and Meta are developing custom silicon to cut costs and improve operating margins, they remain bottlenecked by the supply of these essential components. Consequently, the economics of AI infrastructure are fundamentally changing, with memory suppliers now exerting significant leverage over the broader technology ecosystem.