blocksandfiles.com, Aug. 16, 2021 –
Extended high-bandwidth memory (HBM2E) is barely here, yet Rambus already has a third-generation HBM subsystem ready for use, and it goes more than twice as fast.
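The "more than twice as fast" claim follows from simple per-stack arithmetic: bandwidth is bus width times per-pin data rate. A minimal sketch, assuming publicly reported illustrative figures (HBM2E at roughly 3.6 Gb/s per pin, the Rambus HBM3 subsystem at up to 8.4 Gb/s, and the standard 1024-bit HBM stack interface):

```python
# Per-stack bandwidth (GB/s) = bus width (bits) x per-pin rate (Gb/s) / 8 bits per byte.
# Rates below are assumptions drawn from public figures, not from this article.
BUS_WIDTH_BITS = 1024  # standard HBM stack interface width

def stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = BUS_WIDTH_BITS) -> float:
    """Return peak per-stack bandwidth in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

hbm2e = stack_bandwidth_gbs(3.6)  # ~460.8 GB/s per stack
hbm3 = stack_bandwidth_gbs(8.4)   # ~1075.2 GB/s per stack
print(f"HBM2E: {hbm2e:.1f} GB/s, HBM3: {hbm3:.1f} GB/s, {hbm3 / hbm2e:.2f}x")
```

On those assumed rates, the generational jump is about 2.3x per stack, consistent with "more than twice as fast."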
Server and GPU memory capacity and speed are set to rocket up from today's socket-connected x86 server DRAM. NVIDIA GPUs have already abandoned that approach and use HBM – stacked memory dies connected to a physically close GPU through an interposer instead of socketed channels.
We can view an HBM+interposer+processor system as using chiplets, with the HBM stack being one chiplet and the processor another, both connected via the interposer.