Mistral AI has released Mistral Small 4, its first model to unify the company's specialized flagship capabilities in a single system: the reasoning of Magistral, the multimodal functions of Pixtral, and the coding abilities of Devstral. Released under the open-source Apache 2.0 license, it gives users a general assistant, a reasoning engine, and a multimodal tool without switching models.

The Mixture-of-Experts (MoE) architecture totals 119 billion parameters and supports text and image inputs within a 256k-token context window. Compared with Mistral Small 3, latency is reduced by 40% and throughput has tripled. Developers can also configure the model's reasoning effort to trade speed against output quality.
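To illustrate how a configurable reasoning-effort knob might look from a developer's side, here is a minimal sketch that builds a chat-completions-style request payload. The `reasoning_effort` field, its `low`/`medium`/`high` values, and the `mistral-small-4` model identifier are all assumptions for illustration, modeled on common OpenAI-compatible APIs, not confirmed Mistral API parameters.

```python
import json

# Hypothetical request payload for a chat-completions-style endpoint.
# The "reasoning_effort" field and its values are assumptions; consult
# Mistral's API documentation for the real parameter name and values.
def build_request(prompt: str, effort: str = "medium") -> dict:
    allowed = {"low", "medium", "high"}
    if effort not in allowed:
        raise ValueError(f"effort must be one of {sorted(allowed)}")
    return {
        "model": "mistral-small-4",  # assumed model identifier
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,  # assumed knob: lower = faster, less deliberation
    }

# Lower effort favors latency; higher effort favors answer quality.
payload = build_request("Summarize this contract.", effort="low")
print(json.dumps(payload, indent=2))
```

In practice the trade-off is the point: a request with `effort="low"` would target quick, cheap responses, while `effort="high"` would allow the model more deliberation on hard problems.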