Mistral: Mistral 7B Instruct v0.3

Mistral
Input: text
Output: text
Released: May 27, 2024
Updated: Apr 28, 2025

A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.

An improved version of Mistral 7B Instruct v0.2, with the following changes:

  • Extended vocabulary to 32768
  • Supports v3 Tokenizer
  • Supports function calling

NOTE: Support for function calling depends on the provider.
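
For reference, the sketch below shows one way to exercise function calling against an OpenAI-compatible chat completions endpoint. The base URL, API key variable, model ID (`mistralai/mistral-7b-instruct-v0.3`), and the `get_weather` tool are illustrative assumptions, not values from this page; check your provider's catalog for the exact model identifier and whether tool use is enabled.

```python
# Minimal function-calling sketch against an OpenAI-compatible endpoint.
# Assumptions: the provider exposes /chat/completions, supports the "tools"
# parameter for this model, and the model ID below matches its catalog.
import json
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # assumed provider endpoint
    api_key=os.environ["PROVIDER_API_KEY"],
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="mistralai/mistral-7b-instruct-v0.3",  # assumed model ID
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, the arguments arrive as a JSON string.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```

Providers that do not support function calling will typically ignore or reject the `tools` parameter, so it is worth keeping a plain-text fallback path.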

32,768 Token Context

Process and analyze large documents and conversations.

Advanced Coding

Improved capabilities in front-end development and full-stack updates.

Agentic Workflows

Autonomously navigate multi-step processes with improved reliability.

Available On

| Provider  | Model ID  | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|-----------|-----------|---------|------------|------------|-------------|------------|---------|
| Enfer     | enfer     | 33K     | 16K        | $0.03/M    | $0.05/M     | 139.4 t/s  | 4727 ms |
| DeepInfra | deepInfra | 33K     | 16K        | $0.03/M    | $0.06/M     | 125.9 t/s  | 595 ms  |
| NextBit   | nextBit   | 33K     | -          | $0.03/M    | $0.06/M     | 78.3 t/s   | 1359 ms |
| NovitaAI  | novitaAi  | 33K     | -          | $0.03/M    | $0.06/M     | 158.1 t/s  | 735 ms  |
| Together  | together  | 33K     | 4K         | $0.20/M    | $0.20/M     | 237.3 t/s  | 577 ms  |
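
To make the trade-offs in the table above concrete, the sketch below estimates a per-request cost for each provider from its listed input and output rates. The rates are the per-million-token figures from the table; the token counts are made-up example values.

```python
# Rough per-request cost estimate from the provider table above.
# Rates are USD per million tokens; token counts are illustrative only.
providers = {
    "Enfer":     {"input": 0.03, "output": 0.05},
    "DeepInfra": {"input": 0.03, "output": 0.06},
    "NextBit":   {"input": 0.03, "output": 0.06},
    "NovitaAI":  {"input": 0.03, "output": 0.06},
    "Together":  {"input": 0.20, "output": 0.20},
}

input_tokens = 8_000    # e.g. a long prompt plus retrieved context
output_tokens = 1_000   # e.g. a detailed answer

for name, rate in providers.items():
    cost = (input_tokens * rate["input"] + output_tokens * rate["output"]) / 1_000_000
    print(f"{name:10s} ${cost:.6f} per request")
```

Throughput and latency matter as much as price for interactive use; the cheapest provider here is not the fastest one.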
Standard Pricing

  • Input tokens: $0.000000028 per token (≈ $0.028/M)
  • Output tokens: $0.000000054 per token (≈ $0.054/M)
