Meta sheds more light on how it is evolving Llama 3 training — it currently relies on almost 50,000 Nvidia H100 GPUs, but how long before Meta switches to its own AI chip?

Meta has unveiled details about its AI training infrastructure, revealing that it currently relies on almost 50,000 Nvidia H100 GPUs to train Llama 3, its open-source LLM.

The company says it will have over 350,000 Nvidia H100 GPUs in service by the end of 2024, and the computing power equivalent to nearly 600,000 H100s when combined with hardware from other sources.
