
Mistral: Ministral 3B

Mistral
Input: text
Output: text
Released: Oct 17, 2024
Updated: Mar 28, 2025

Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mistral 7B on most benchmarks. Supporting up to 128k context length, it’s ideal for orchestrating agentic workflows and specialist tasks with efficient inference.

131,072 Token Context

Process and analyze large documents and conversations.

Advanced Coding

Improved capabilities in front-end development and full-stack updates.

Agentic Workflows

Autonomously navigate multi-step processes with improved reliability.
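The card highlights function-calling for agentic use. Below is a minimal sketch of a function-calling request, assuming an OpenAI-style chat-completions endpoint at api.mistral.ai and the model identifier `ministral-3b-latest`; the `get_weather` tool is purely illustrative. Verify the endpoint, model ID, and response shape against the provider's documentation.

```python
# Minimal function-calling sketch for Ministral 3B.
# The endpoint URL and model identifier are assumptions; the get_weather
# tool is hypothetical and exists only to illustrate the request shape.
import json
import os

import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"  # assumed endpoint
API_KEY = os.environ["MISTRAL_API_KEY"]

payload = {
    "model": "ministral-3b-latest",  # assumed ID for Ministral 3B
    "messages": [
        {"role": "user", "content": "What's the weather in Paris right now?"}
    ],
    # One illustrative tool definition; the model decides whether to call it.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
message = response.json()["choices"][0]["message"]

# If the model chose to call the tool, its arguments arrive as a JSON string.
for call in message.get("tool_calls") or []:
    print(call["function"]["name"], json.loads(call["function"]["arguments"]))
```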

Available On

Provider: Mistral
Model ID: mistral
Context: 131K
Max Output: -
Input Cost: $0.04/M
Output Cost: $0.04/M
Throughput: 240.1 t/s
Latency: 180 ms
Standard Pricing
Input Tokens: $0.04 per million tokens ($0.00000004 per token)
Output Tokens: $0.04 per million tokens ($0.00000004 per token)
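At the listed rate of $0.04 per million tokens for both input and output, per-request cost is simple arithmetic; a quick sketch:

```python
# Cost estimate at the listed rate of $0.04 per million tokens
# for both input and output (i.e., $0.00000004 per token).
PRICE_PER_TOKEN = 0.04 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at the listed rates."""
    return (input_tokens + output_tokens) * PRICE_PER_TOKEN

# Example: a 100k-token prompt with a 2k-token completion.
print(f"${estimate_cost(100_000, 2_000):.6f}")  # ~$0.004080
```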
