AMD Launches Instinct MI300X GPU And MI300A APU For AI Needs

After NVIDIA, Amazon, and Microsoft launched their latest AI chips this year, it's AMD's turn with two new products: the Instinct MI300X GPU and the MI300A APU. According to AMD, its latest chips outperform the NVIDIA H100, currently the most widely used chip for training AI models.



The AMD Instinct MI300X is a GPU built on the AMD CDNA 3 architecture. Compared to the previous Instinct MI250X, the MI300X has roughly 40% more compute units (CUs) and 1.5x the memory capacity, with up to 192 GB of HBM3 memory and up to 5.3 TB/s of bandwidth. Through the AMD Instinct Platform, eight MI300X GPUs can be combined. AMD says the platform offers up to 1.6x better performance than the NVIDIA H100 HGX and is the only such platform that can run a model with 70 billion parameters.
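The 70-billion-parameter claim lines up with a quick back-of-envelope calculation (not from AMD's announcement, just an illustration): at 16-bit precision, each parameter takes 2 bytes, so the model weights alone need about 140 GB, which fits inside a single MI300X's 192 GB of HBM3. The figures below ignore activation and KV-cache memory, which add further overhead in practice.

```python
# Rough illustration: memory needed for the weights of a 70B-parameter
# model at FP16/BF16 precision (2 bytes per parameter). Activation and
# KV-cache memory are deliberately ignored here.
params = 70e9          # 70 billion parameters
bytes_per_param = 2    # FP16 / BF16
weights_gb = params * bytes_per_param / 1e9

print(f"Weights alone: {weights_gb:.0f} GB")          # 140 GB
print(f"Fits in 192 GB HBM3: {weights_gb < 192}")     # True
```

By the same arithmetic, an 80 GB H100 cannot hold the FP16 weights of a 70B model on a single accelerator, which is presumably why AMD highlights this capability.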



The MI300A is an accelerated processing unit (APU) that combines AMD Zen 4 CPU cores and a CDNA 3 GPU with 128 GB of HBM3 memory. Compared to the Instinct MI250X, the MI300A offers a 1.9x improvement in performance per watt. The El Capitan supercomputer at Lawrence Livermore National Laboratory has been confirmed to use this APU and is expected to be the second supercomputer with exascale processing capabilities.


| Instinct | Architecture | GPU CUs | CPU Cores | Memory | Memory bandwidth | Process nodes |
|---|---|---|---|---|---|---|
| MI300A | AMD CDNA 3 | 228 | 24 "Zen 4" | 128 GB HBM3 | 5.3 TB/s | 5nm / 6nm |
| MI300X | AMD CDNA 3 | 304 | N/A | 192 GB HBM3 | 5.3 TB/s | 5nm / 6nm |
| Platform | AMD CDNA 3 | 2,432 | N/A | 1.5 TB HBM3 | 5.3 TB/s per OAM | 5nm / 6nm |
