- Musk’s xAI activates massive GPU cluster
- The cluster's concentrated compute challenges Meta's AI lead
- Experts anticipate advancements in xAI’s Grok chatbot
Elon’s Got the Chips
Elon Musk’s AI company xAI has brought its Colossus training cluster online, boasting 100,000 Nvidia H100 GPUs.
Musk claims the system, built in just 122 days, is “the most powerful AI training system in the world.” The cluster, located in Memphis, Tennessee, is set to double in size within months.
Meta’s Lead Narrows
This move puts xAI in a stronger position to compete against Mark Zuckerberg’s Meta.
Although Meta reportedly has 350,000 H100 GPUs in total, xAI’s concentrated 100,000-chip cluster far exceeds the 16,000 GPUs used to train Meta’s largest Llama 3 model.
Grok-2 on the Horizon
Industry experts are buzzing about xAI’s potential.
Sequoia partner Shaun Maguire suggests the upcoming Grok-2 chatbot is “roughly at parity” with top models. However, questions remain about xAI’s product strategy despite its impressive hardware.