The RAM that’s being produced at scale for parallel computation is HBM; that’s where the capacity is going. It’s not in the form of DIMMs — you can’t take it after it’s been used and stick it into a PC’s motherboard.
EDIT:
https://developer.nvidia.com/blog/inside-nvidia-blackwell-ultra-the-chip-powering-the-ai-factory-era/

> Inside NVIDIA Blackwell Ultra: The Chip Powering the AI Factory Era
> Blackwell Ultra doesn’t just scale compute—it scales memory capacity to meet the demands of the largest AI models. With 288 GB of HBM3e per GPU, it offers 3.6x more on-package memory than H100 and 50% more than Blackwell, as shown in Figure 5.
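For what it’s worth, the quoted multipliers check out against the public spec-sheet capacities. A quick sanity check in Python — the 80 GB (H100) and 192 GB (B200) figures are my own inputs from the spec sheets, not from the excerpt:

```python
# Back-of-the-envelope check of the quoted capacity claims.
# H100 and Blackwell (B200) per-GPU HBM capacities are public
# spec-sheet numbers, assumed here; 288 GB is from the excerpt.
capacities_gb = {
    "H100": 80,               # HBM3
    "Blackwell (B200)": 192,  # HBM3e
    "Blackwell Ultra": 288,   # HBM3e
}

ultra = capacities_gb["Blackwell Ultra"]
print(f"vs H100:      {ultra / capacities_gb['H100']:.1f}x")              # 3.6x
print(f"vs Blackwell: {ultra / capacities_gb['Blackwell (B200)']:.1f}x")  # 1.5x, i.e. 50% more
```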
True. So in six months, the market will be flooded with cheap, barely-used “AI” server hardware no one wants, and RAM for PCs will still be stupid expensive, because we live in the stupidest timeline.