NVIDIA this morning unveiled its latest artificial intelligence (AI) chip in the form of an updated NVIDIA GH200 Grace Hopper Superchip platform with faster HBM3e memory. It is an upgrade over the existing GH200 Grace Hopper Superchip, which was previously available only with HBM3 memory.
This makes it the first chip to use HBM3e memory, which delivers bandwidth of up to 10TB/s, around 50% faster than HBM3. According to NVIDIA, the new platform offers up to 3.5 times the memory capacity and up to 3 times the memory bandwidth of the previous-generation platform.
In its dual configuration, the NVIDIA GH200 Grace Hopper platform provides 144 Arm Neoverse cores, 282GB of HBM3e memory, and up to 8 petaflops of AI performance. Because the improvements are limited to memory, this "new chip" still uses the same Grace CPU and the GH100-based Hopper GPU. The platform is already in production, but systems based on it are only expected to reach customers in Q2 2024.
Production of the original NVIDIA GH200 Grace Hopper platform with HBM3 will continue, with it being used in the NVIDIA DGX GH200 system, which offers 144 terabytes of memory. All Grace Hopper GH200 platforms are designed for training AI models.