xAI Activates World's Largest AI Training Cluster – Memphis Supercluster With 100,000 NVIDIA H100 GPUs



xAI, Elon Musk's artificial intelligence company, has reportedly activated its first AI training supercomputer cluster, named the Memphis Supercluster.


The cluster uses NVIDIA H100 GPUs because Elon Musk did not want to wait for the NVIDIA B100 and B200, which will not go on sale until the end of this year, while the NVIDIA H200 remains very difficult to obtain for now.



Elon Musk said the Memphis Supercluster consists of 100,000 liquid-cooled NVIDIA H100 graphics cards connected over a single RDMA fabric, forming an extremely powerful AI training cluster.
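
To illustrate what "GPUs connected over an RDMA fabric" means in practice for AI training, here is a minimal, hypothetical sketch of a multi-node data-parallel training job using PyTorch with the NCCL backend, which typically moves gradients between nodes over RDMA transports (InfiniBand/RoCE) when available. This is an illustrative example only, not xAI's actual software stack; the model, hyperparameters, and launch setup are placeholders.

```python
# Minimal sketch (assumption: not xAI's actual stack) of multi-node,
# multi-GPU data-parallel training with PyTorch + NCCL. NCCL uses RDMA
# transports (e.g. InfiniBand / RoCE) between nodes when they are available.
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main() -> None:
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for every process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model standing in for a large language model.
    model = nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    # One synthetic training step; gradients are all-reduced across every
    # GPU in the cluster over the NCCL fabric during backward().
    x = torch.randn(8, 1024, device=local_rank)
    y = torch.randn(8, 1024, device=local_rank)
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Such a script would be launched with one process per GPU on every node, for example via `torchrun --nnodes=<N> --nproc_per_node=8 train.py`; the node count and script name here are placeholders.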

xAI is expected to be ready to train its Grok 3 large language model (LLM) around the end of this year, using content from X/Twitter as well as open-source data. The Memphis Supercluster is also reported to be the largest and most powerful AI training supercomputer cluster to date, even when compared to supercomputers such as Aurora, Frontier, and Fugaku.
