Yesterday NVIDIA launched the Blackwell B200, billing it as the world's most powerful artificial intelligence (AI) chip. Each B200 GPU packs 208 billion transistors and delivers 20 petaflops of FP4 compute.
NVIDIA Blackwell B200 (left) with Grace CPU
NVIDIA also offers the NVIDIA GB200 Grace Blackwell Superchip, which pairs two NVIDIA B200 GPUs with one NVIDIA Grace CPU. NVIDIA says the new system can train LLMs with 1 trillion parameters at up to 25X lower cost and energy consumption than the previous generation.
By comparison, NVIDIA says training an LLM with 1.8 trillion parameters previously required 8,000 Hopper GPUs drawing 15 MW of electricity, whereas 2,000 Blackwell GPUs can do the same job with only 4 MW. Overall, NVIDIA puts the GB200's performance at about 7X that of its earlier H100.
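To put those quoted figures side by side, here is a small back-of-the-envelope comparison. It only restates the GPU counts and power numbers above; it is not an official NVIDIA calculation, and the 25X cost/energy claim is NVIDIA's own framing rather than something derivable from these two numbers alone.

```python
# Hopper vs. Blackwell training setup, as quoted in the article.
hopper_gpus, hopper_mw = 8_000, 15.0
blackwell_gpus, blackwell_mw = 2_000, 4.0

gpu_reduction = hopper_gpus / blackwell_gpus      # 4.0x fewer GPUs
power_reduction = hopper_mw / blackwell_mw        # 3.75x less total power draw

# Implied average power per GPU (facility-level figure divided by GPU count).
watts_per_hopper = hopper_mw * 1e6 / hopper_gpus          # ~1,875 W
watts_per_blackwell = blackwell_mw * 1e6 / blackwell_gpus # ~2,000 W

print(f"GPU count reduction:   {gpu_reduction:.2f}x")
print(f"Total power reduction: {power_reduction:.2f}x")
print(f"Per-GPU power: Hopper ~{watts_per_hopper:.0f} W, Blackwell ~{watts_per_blackwell:.0f} W")
```

Note that the per-GPU power is roughly the same; the savings come from needing a quarter as many GPUs to finish the same training run.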
Alongside the superchip, NVIDIA launched the NVIDIA GB200 NVL72, a system that contains 36 NVIDIA GB200 Grace Blackwell Superchips. NVIDIA rates NVL72 performance at up to 30X that of the NVIDIA H100 Tensor Core GPU. Each NVL72 offers up to 1.4 exaFLOPS of AI processing and 30 TB of memory, and it will be the base unit for NVIDIA's latest DGX SuperPOD.
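The 1.4 exaFLOPS figure is consistent with the per-GPU number quoted earlier. A minimal sketch of the arithmetic, under the assumption (not stated explicitly in the article) that the aggregate figure refers to FP4 throughput:

```python
# GB200 NVL72 aggregate throughput, from the per-B200 figure quoted above.
superchips = 36
gpus_per_superchip = 2          # each GB200 pairs two B200 GPUs with one Grace CPU
fp4_petaflops_per_gpu = 20      # per-B200 FP4 figure from the opening paragraph

total_petaflops = superchips * gpus_per_superchip * fp4_petaflops_per_gpu
print(f"{total_petaflops} PFLOPS = {total_petaflops / 1000:.2f} exaFLOPS")  # ~1.44 exaFLOPS
```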
Early customers of NVIDIA Blackwell are Amazon Web Services, Dell Technologies, Google, Meta, Microsoft, OpenAI, Oracle, Tesla and xAI.