Nvidia’s Blackwell Chips Slash AI Model Training Times, Set New Records

Nvidia’s new Blackwell chips have significantly accelerated the training of large AI models. Recent MLCommons benchmarks show that Nvidia’s Blackwell architecture set a record by training Meta’s massive Llama 3.1 405B model in roughly 27 minutes using 2,496 Blackwell GPUs, whereas the previous-generation Hopper chips required more than three times as many GPUs to post comparable times. This speedup translates into major time and cost savings for organizations training complex AI models, and a 27-minute training time is a first in MLCommons benchmarks for a model at this scale.
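For a rough sense of what these figures imply in aggregate GPU-time, the short sketch below multiplies GPU count by wall-clock minutes. Only the Blackwell numbers (2,496 GPUs, 27 minutes) come from the reported result; the Hopper-side values are assumed placeholders for illustration, not figures from the benchmark.

```python
# Back-of-the-envelope GPU-time comparison based on the reported Blackwell result.

blackwell_gpus = 2496        # reported GPU count
blackwell_minutes = 27       # reported wall-clock training time
blackwell_gpu_hours = blackwell_gpus * blackwell_minutes / 60
print(f"Blackwell: {blackwell_gpu_hours:,.0f} GPU-hours")  # ~1,123 GPU-hours

# Hypothetical Hopper-class run for comparison: assume roughly three times the
# GPU count at a similar wall-clock time (assumed values, for illustration only).
hopper_gpus = 3 * blackwell_gpus
hopper_minutes = 27
hopper_gpu_hours = hopper_gpus * hopper_minutes / 60
print(f"Hopper (assumed): {hopper_gpu_hours:,.0f} GPU-hours")

print(f"Relative GPU-time: {hopper_gpu_hours / blackwell_gpu_hours:.1f}x")
```

Under these assumptions the Blackwell run consumes about a third of the aggregate GPU-time, which is where the time and cost savings claimed above would come from.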