Mistral: Mixtral 8x22B Instruct
Mistral
Input: text
Output: text
Released: Apr 17, 2024 • Updated: Mar 28, 2025
Mistral's official instruct fine-tuned version of Mixtral 8x22B. It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its strengths include:
- strong math, coding, and reasoning
- large context length (64k)
- fluency in English, French, Italian, German, and Spanish
See benchmarks in Mistral's launch announcement. #moe
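For a concrete starting point, here is a minimal sketch of querying this model through Mistral's own API. It assumes the v1 `mistralai` Python SDK and the hosted model ID `open-mixtral-8x22b`; both are assumptions, as neither appears in this listing (the provider table below uses different per-provider IDs).

```python
# Minimal sketch: querying Mixtral 8x22B Instruct via Mistral's hosted API.
# Assumes the v1 `mistralai` SDK and the model ID "open-mixtral-8x22b";
# both are assumptions, not taken from this listing.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="open-mixtral-8x22b",  # assumed ID for Mixtral 8x22B Instruct
    messages=[
        # The model is fluent in French, among the languages listed above.
        {"role": "user", "content": "Résume ce texte en une phrase: ..."},
    ],
)
print(response.choices[0].message.content)
```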
65,536 Token Context
Process and analyze large documents and conversations.
Advanced Coding
Improved performance on coding tasks, from front-end development to full-stack changes.
Agentic Workflows
Autonomously navigate multi-step processes with improved reliability.
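To make the 65,536-token window concrete, here is a rough fit check for a document before sending it. The ~4 characters-per-token ratio is a crude heuristic assumed here; exact counts require the model's actual tokenizer.

```python
# Rough check of whether a document fits the 65,536-token context window,
# leaving headroom for the model's response. The ~4 chars/token ratio is a
# crude assumption; use the model's tokenizer for exact counts.
CONTEXT_WINDOW = 65_536
CHARS_PER_TOKEN = 4  # heuristic, not exact

def fits_in_context(text: str, reserved_for_output: int = 2_048) -> bool:
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_WINDOW - reserved_for_output

print(fits_in_context("word " * 60_000))  # ~75k estimated tokens -> False
```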
Available On
| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|---|---|---|---|---|---|---|---|
| Nebius AI Studio | nebiusAiStudio | 66K | - | $0.40/M | $1.20/M | 54.4 t/s | 194 ms |
| Fireworks | fireworks | 66K | - | $0.90/M | $0.90/M | 71.9 t/s | 466 ms |
| Mistral | mistral | 66K | - | $2.00/M | $6.00/M | 69.4 t/s | 282 ms |
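Since all three providers bill per million tokens, a short sketch makes the cost gap easy to compare. The rates come from the table above; the 10k-input / 1k-output token counts are illustrative, not from the listing.

```python
# Estimated cost of one request under each provider's listed rates ($/M tokens).
# The 10k-in / 1k-out token counts are illustrative assumptions.
PROVIDERS = {
    "Nebius AI Studio": (0.40, 1.20),
    "Fireworks": (0.90, 0.90),
    "Mistral": (2.00, 6.00),
}

def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float, out_rate: float) -> float:
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

for name, (in_rate, out_rate) in PROVIDERS.items():
    print(f"{name}: ${request_cost(10_000, 1_000, in_rate, out_rate):.4f}")
# Nebius AI Studio: $0.0052, Fireworks: $0.0099, Mistral: $0.0260
```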
Standard Pricing
Input Tokens: $0.0000004 per token ($0.40/M)
Output Tokens: $0.0000012 per token ($1.20/M)
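For reference, the per-token figures above follow directly from the per-million rates; dividing by 1,000 instead gives the per-1K-token units some pricing pages use.

```python
# Per-token figures are the $/M rates divided by 1e6; per-1K divide by 1e3.
RATE_IN_PER_M, RATE_OUT_PER_M = 0.40, 1.20
print(RATE_IN_PER_M / 1e6, RATE_OUT_PER_M / 1e6)  # 4e-07 1.2e-06 ($/token)
print(RATE_IN_PER_M / 1e3, RATE_OUT_PER_M / 1e3)  # 0.0004 0.0012 ($/1K tokens)
```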