Mistral AI and NVIDIA Launch Mistral NeMo, a 12B Model with 128K Token Context Window

In a significant development for the AI community, Mistral AI has partnered with NVIDIA to release Mistral NeMo, a 12-billion-parameter model with a context window of up to 128,000 tokens. Mistral AI claims state-of-the-art reasoning, world knowledge, and coding accuracy for the model within its size category.

The collaboration between Mistral AI and NVIDIA has yielded a model that prioritizes ease of use alongside performance. Mistral NeMo relies on a standard architecture, making it a drop-in replacement for systems that already use Mistral 7B.

In a bid to promote widespread adoption and further research, Mistral AI has made both pre-trained base and instruction-tuned checkpoints available under the Apache 2.0 license. This open-source approach is expected to appeal to researchers and enterprises, potentially accelerating the model’s integration into diverse applications.

Advanced Features and Performance

One of Mistral NeMo's standout features is that it was trained with quantization awareness, enabling FP8 inference without degrading performance. This capability is crucial for organizations looking to serve large language models efficiently.
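
As a rough illustration, here is a minimal sketch of FP8 inference using vLLM. The serving framework and checkpoint name are assumptions not taken from the announcement, and FP8 requires a GPU generation that supports it (e.g., NVIDIA Hopper or Ada Lovelace):

```python
# Minimal sketch: serving Mistral NeMo with FP8 weights via vLLM.
# Assumes vLLM is installed and the Hugging Face checkpoint name below
# is available to your account.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-Nemo-Instruct-2407",  # assumed checkpoint name
    quantization="fp8",       # quantize weights to FP8 at load time
    max_model_len=16384,      # trim the 128K window to fit smaller GPUs
)

params = SamplingParams(temperature=0.3, max_tokens=128)
outputs = llm.generate(["Explain FP8 inference in one sentence."], params)
print(outputs[0].outputs[0].text)
```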

Performance comparisons provided by Mistral AI highlight the capabilities of the Mistral NeMo base model against other recent open-source pre-trained models, such as Gemma 2 9B and Llama 3 8B. Mistral NeMo demonstrates superior performance, particularly in multilingual applications. According to Mistral AI, the model is particularly strong in English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi.

Introducing Tekken Tokenizer

Mistral NeMo introduces Tekken, a new tokenizer based on Tiktoken. Trained on over 100 languages, Tekken offers improved compression efficiency for both natural language text and source code compared to the SentencePiece tokenizer used in previous Mistral models. Mistral AI reports that Tekken is approximately 30% more efficient at compressing source code and several major languages, with even greater gains for Korean and Arabic. Additionally, Tekken outperforms the Llama 3 tokenizer in text compression for about 85% of all languages, potentially giving Mistral NeMo an edge in multilingual applications.
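
To make the compression claim concrete, a minimal sketch along these lines could count tokens produced by Tekken (via the Mistral NeMo checkpoint) against an older SentencePiece-based Mistral tokenizer. The checkpoint names are assumptions, both repositories may require accepting the model license on Hugging Face, and fewer tokens for the same text means better compression:

```python
# Minimal sketch: comparing token counts between the Mistral NeMo tokenizer
# (Tekken) and the tokenizer of an earlier Mistral model. Checkpoint names
# below are assumptions; verify them on Hugging Face before running.
from transformers import AutoTokenizer

tekken = AutoTokenizer.from_pretrained("mistralai/Mistral-Nemo-Base-2407")
legacy = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

samples = {
    "english": "The quick brown fox jumps over the lazy dog.",
    "korean": "다람쥐 헌 쳇바퀴에 타고파.",
    "python": "def fib(n):\n    return n if n < 2 else fib(n - 1) + fib(n - 2)",
}

for name, text in samples.items():
    print(f"{name}: tekken={len(tekken.encode(text))} "
          f"legacy={len(legacy.encode(text))}")
```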

Accessibility and Integration

The model’s weights are now available on Hugging Face for both the base and instruct versions. Developers can start experimenting with Mistral NeMo using the mistral-inference tool and adapt it with mistral-finetune. For users of Mistral’s platform, the model is accessible under the name open-mistral-nemo.
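
For platform users, a minimal sketch with the official mistralai Python client might look like the following; it assumes the v1 SDK and a MISTRAL_API_KEY set in the environment:

```python
# Minimal sketch: calling Mistral NeMo on Mistral's platform via the
# official mistralai Python client (v1 SDK assumed).
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="open-mistral-nemo",  # model name from the announcement
    messages=[{"role": "user", "content": "Summarize Mistral NeMo in one line."}],
)
print(response.choices[0].message.content)
```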

In collaboration with NVIDIA, Mistral NeMo is also packaged as an NVIDIA NIM inference microservice, available through ai.nvidia.com. This integration could streamline deployment for organizations already invested in NVIDIA’s AI ecosystem.
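
NIM containers expose an OpenAI-compatible HTTP endpoint, so a deployed instance can be queried with the standard OpenAI client, as in the sketch below. The port and model identifier are placeholders; check your deployment's /v1/models listing for the exact name:

```python
# Minimal sketch: querying a locally deployed Mistral NeMo NIM container
# through its OpenAI-compatible endpoint. Base URL and model name are
# placeholders for illustration only.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="mistral-nemo-12b-instruct",  # placeholder; verify via /v1/models
    messages=[{"role": "user", "content": "Hello from a NIM deployment!"}],
)
print(response.choices[0].message.content)
```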

Conclusion

The release of Mistral NeMo marks a significant advancement in the field of AI. By combining high performance, multilingual capabilities, and open-source availability, Mistral AI and NVIDIA are positioning this model as a versatile tool for a wide range of AI applications across various industries and research fields.

For more information on Mistral NeMo and its applications, visit Mistral AI and NVIDIA AI.