
Mistral AI launches Mistral 3, a new open-source LLM family optimized for cloud and edge with NVIDIA.
Mistral AI has announced the Mistral 3 family of open-source, multilingual, multimodal large language models, optimized for deployment across NVIDIA's supercomputing and edge platforms. The flagship Mistral Large 3 model features 41B active parameters and a 256K context window, offering high scalability, efficiency, and adaptability for enterprise AI workloads. The models run on NVIDIA hardware using inference frameworks such as TensorRT-LLM and SGLang to achieve peak performance and lower per-token costs.
Mistral 3 is available on leading open-source platforms and cloud providers, with plans for deployment as NVIDIA NIM microservices. The release also includes nine smaller models designed to let developers run AI anywhere, from cloud to edge. Through this collaboration, Mistral AI and NVIDIA aim to broaden access to state-of-the-art AI across a wide range of enterprise and developer use cases, marking a significant step forward for open-source large language models.
Source: NVIDIA Blog