
Mistral: Mistral Nemo

Mistral
Input: text
Output: text
Released: Jul 19, 2024
Updated: Mar 28, 2025

A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA.

The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi.

It supports function calling and is released under the Apache 2.0 license.
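Since the model supports function calling, most of the providers listed below accept the common OpenAI-style tools schema. The sketch below is a minimal, hypothetical example of that pattern: the endpoint URL, the model ID, and the get_weather tool are illustrative assumptions, not details taken from this page, so adjust them for whichever provider you use.

```python
# Minimal function-calling sketch against an OpenAI-compatible chat endpoint.
# The endpoint URL, model ID, and get_weather tool are illustrative assumptions.
import json
import os

import requests

API_URL = "https://api.example-provider.com/v1/chat/completions"  # hypothetical endpoint
API_KEY = os.environ["PROVIDER_API_KEY"]

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "mistralai/mistral-nemo",  # provider-specific model ID (assumed)
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": tools,
        "tool_choice": "auto",
    },
    timeout=60,
)
response.raise_for_status()

# If the model decides to call the tool, the call arrives as a tool_calls entry.
message = response.json()["choices"][0]["message"]
for call in message.get("tool_calls", []):
    print(call["function"]["name"], json.loads(call["function"]["arguments"]))
```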

131,072 Token Context

Process and analyze large documents and conversations.

Advanced Coding

Improved capabilities in front-end development and full-stack updates.

Agentic Workflows

Autonomously navigate multi-step processes with improved reliability.

Available On

| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|---|---|---|---|---|---|---|---|
| Kluster | klusterAi | 131K | 131K | $0.01/M | $0.03/M | 94.0 t/s | 918 ms |
| DeepInfra | deepInfra | 131K | 16K | $0.01/M | $0.03/M | 43.6 t/s | 308 ms |
| Enfer | enfer | 131K | 66K | $0.02/M | $0.07/M | 36.6 t/s | 1655 ms |
| NextBit | nextBit | 128K | - | $0.03/M | $0.07/M | 38.3 t/s | 1712 ms |
| InferenceNet | inferenceNet | 16K | 16K | $0.04/M | $0.10/M | 63.3 t/s | 1008 ms |
| Parasail | parasail | 131K | 131K | $0.04/M | $0.11/M | 79.2 t/s | 722 ms |
| Nebius | nebiusAiStudio | 128K | - | $0.04/M | $0.12/M | 43.1 t/s | 468 ms |
| Novita | novitaAi | 60K | 32K | $0.04/M | $0.17/M | 55.8 t/s | 1037 ms |
| Atoma | atoma | 128K | 80K | $0.10/M | $0.10/M | 73.8 t/s | 669 ms |
| InoCloud | inoCloud | 131K | 131K | $0.14/M | $0.14/M | 100.9 t/s | 1243 ms |
| Mistral | mistral | 131K | - | $0.15/M | $0.15/M | 119.9 t/s | 245 ms |
| Azure | azure | 128K | - | $0.30/M | $0.30/M | 100.3 t/s | 1144 ms |
Standard Pricing

Input Tokens: $0.00000001 per token ($0.01 per 1M tokens)

Output Tokens: $0.000000029 per token ($0.029 per 1M tokens)
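As a quick sanity check on the per-token rates above, here is a short cost-estimate sketch; the token counts are made-up examples, not figures from this page.

```python
# Estimate the cost of a single request at the standard per-token rates above.
INPUT_RATE = 0.00000001    # USD per input token  (= $0.01 per 1M tokens)
OUTPUT_RATE = 0.000000029  # USD per output token (= $0.029 per 1M tokens)

input_tokens = 50_000   # e.g. a long document plus the prompt (illustrative)
output_tokens = 2_000   # e.g. a generated summary (illustrative)

cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"${cost:.6f}")  # -> $0.000558
```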
