230k GPUs, including 30k GB200s, are operational for training Grok
@xAI in a single supercluster called Colossus 1 (inference is done by our cloud providers).
At Colossus 2, the first batch of 550k GB200s & GB300s, also for training, starts going online in a few weeks.
As Jensen Huang has stated,
@xAI is unmatched in speed. It’s not even close.