Mistral: Mistral 7B Instruct v0.3
Developer: Mistral
Input: text
Output: text
Released: May 27, 2024 · Updated: Apr 28, 2025
A high-performing, industry-standard 7.3B-parameter model, optimized for speed and context length.
An improved version of Mistral 7B Instruct v0.2, with the following changes:
- Extended vocabulary to 32,768 tokens
- Supports v3 Tokenizer
- Supports function calling
NOTE: Support for function calling depends on the provider.
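Because function calling is exposed through each provider's API rather than the model weights themselves, the exact request shape varies. The sketch below assumes an OpenAI-compatible chat completions endpoint; the base URL, model slug, and the `get_weather` tool are illustrative placeholders, not values taken from this listing.

```python
from openai import OpenAI

# Hypothetical endpoint and key; substitute your provider's values.
client = OpenAI(
    base_url="https://example-provider.com/v1",  # assumption: an OpenAI-compatible API
    api_key="YOUR_API_KEY",
)

# Declare a single tool; the model decides whether to call it.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="mistralai/mistral-7b-instruct-v0.3",  # assumption: slug may differ per provider
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the provider supports function calling, any tool calls appear here.
print(response.choices[0].message.tool_calls)
```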
32,768 Token Context
Process and analyze large documents and conversations; see the token-counting sketch after this list.
Advanced Coding
Improved capabilities in front-end development and full-stack updates.
Agentic Workflows
Autonomously navigate multi-step processes with improved reliability.
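To check that a long prompt actually fits in the 32,768-token window before sending it, the v3 tokenizer this release supports can count tokens locally. This is a minimal sketch assuming the `mistral-common` Python package is installed; the message content is a placeholder.

```python
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

CONTEXT_WINDOW = 32_768  # model context length in tokens

# Load the v3 tokenizer that this model release supports.
tokenizer = MistralTokenizer.v3()

# Tokenize a chat request roughly as the model would see it.
request = ChatCompletionRequest(
    messages=[UserMessage(content="Summarize the attached report ...")],
)
tokens = tokenizer.encode_chat_completion(request).tokens

prompt_len = len(tokens)
print(f"Prompt uses {prompt_len} of {CONTEXT_WINDOW} tokens")
print(f"Room left for output: {CONTEXT_WINDOW - prompt_len} tokens")
```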
Available On
| Provider | Model ID | Context (tokens) | Max Output (tokens) | Input Cost | Output Cost | Throughput | Latency |
|---|---|---|---|---|---|---|---|
| Enfer | enfer | 33K | 16K | $0.03/M | $0.05/M | 139.4 t/s | 4,727 ms |
| DeepInfra | deepInfra | 33K | 16K | $0.03/M | $0.06/M | 125.9 t/s | 595 ms |
| NextBit | nextBit | 33K | - | $0.03/M | $0.06/M | 78.3 t/s | 1,359 ms |
| NovitaAI | novitaAi | 33K | - | $0.03/M | $0.06/M | 158.1 t/s | 735 ms |
| Together | together | 33K | 4K | $0.20/M | $0.20/M | 237.3 t/s | 577 ms |
Standard Pricing
Input Tokens: $0.028 per 1M tokens ($0.000000028 per token)
Output Tokens: $0.054 per 1M tokens ($0.000000054 per token)
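To turn these standard per-token rates into a per-request cost estimate, the sketch below multiplies token counts by the listed input and output rates; the 4,000/1,000 token split is only an example.

```python
# Standard per-token rates from the listing above (USD).
INPUT_RATE = 0.000000028   # $0.028 per 1M input tokens
OUTPUT_RATE = 0.000000054  # $0.054 per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of a single request at the standard rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 4,000-token prompt with a 1,000-token completion.
cost = estimate_cost(4_000, 1_000)
print(f"Estimated cost: ${cost:.6f}")  # ~ $0.000166
```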