NVIDIA Unveils Nemotron 340B

On Friday, NVIDIA released Nemotron 340B, an open large language model (LLM) designed to match GPT-4 (0314). Alongside the model, the company published a technical report detailing Nemotron 340B's training methods and distinctive features.

The model underwent a two-phase pretraining process: an initial phase on 8 trillion tokens, followed by a continued phase on higher-quality tokens and instruction-style data. Supervised fine-tuning drew on 800,000 coding samples and 200,000 samples covering diverse tasks, and alignment leveraged Direct Preference Optimization (DPO) and Reward-aware Preference Optimization (RPO). Training ran on 6,144 H100 GPUs at approximately 42% machine utilization efficiency.

Notably, 98% of the post-training data was synthetically generated, with a focus on English, multilingual data, and source code. Only 20,000 human-annotated samples were used, primarily for reward modeling. NVIDIA credits iterative data generation and careful adjustments to the data distribution with significantly improving model quality, and the report includes comprehensive details on the synthetic data generation pipeline.
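For context on the first of these alignment methods, here is a minimal PyTorch sketch of the standard DPO objective; the function name, tensor arguments, and `beta` value are illustrative and not taken from NVIDIA's implementation.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over a batch of preference pairs.

    Each argument is a 1-D tensor of summed per-token log-probabilities
    for the chosen or rejected response, under either the trainable
    policy or the frozen reference model. `beta` scales the implicit
    KL penalty against the reference model.
    """
    # Implicit reward of each response: beta * log(pi_theta / pi_ref)
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the log-sigmoid margin between chosen and rejected rewards.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

Per the report, RPO builds on this idea: where DPO treats every preference pair equally, RPO also takes into account the size of the reward gap between the two responses.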