
Mistral: Mixtral 8x22B Instruct

Mistral
Input: text
Output: text
Released: Apr 17, 2024 · Updated: Mar 28, 2025

Mistral's official instruct fine-tuned version of Mixtral 8x22B. It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its strengths include:

  • strong math, coding, and reasoning
  • large context length (64k)
  • fluency in English, French, Italian, German, and Spanish

See benchmarks in Mistral's launch announcement. #moe
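
For reference, a minimal sketch of querying the model, assuming Mistral's OpenAI-compatible chat completions endpoint and the `open-mixtral-8x22b` model ID; IDs on other providers (e.g. Fireworks) may differ:

```python
# Minimal sketch: call Mixtral 8x22B Instruct through Mistral's chat
# completions endpoint. Assumes the model ID "open-mixtral-8x22b" and
# an API key in the MISTRAL_API_KEY environment variable.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "open-mixtral-8x22b",
        "messages": [
            {"role": "user", "content": "Summarize the Mixtral 8x22B architecture."}
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```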

65,536 Token Context

Process and analyze large documents and conversations.
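
A document that exceeds the window has to be trimmed before the call. A rough sketch using a ~4-characters-per-token heuristic (a stand-in assumption, not a real tokenizer; actual counts vary by content and tokenizer):

```python
# Trim a document so prompt + reply fit in the 65,536-token window.
CONTEXT_TOKENS = 65_536
RESERVED_FOR_OUTPUT = 2_048   # leave room for the model's reply
CHARS_PER_TOKEN = 4           # crude heuristic, not exact

def fit_to_context(document: str) -> str:
    budget_tokens = CONTEXT_TOKENS - RESERVED_FOR_OUTPUT
    return document[: budget_tokens * CHARS_PER_TOKEN]

long_doc = "lorem ipsum " * 50_000
print(len(fit_to_context(long_doc)))  # capped at 253,952 characters
```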

Advanced Coding

Improved capabilities in front-end development and full-stack code changes.

Agentic Workflows

Autonomously navigate multi-step processes with improved reliability.
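
A sketch of what such a multi-step loop can look like: the model is called repeatedly and tool results are fed back until it produces a final answer. Everything here (`call_model`, the `SEARCH:` convention, the `search` tool) is hypothetical scaffolding, not a provider API:

```python
# Hypothetical multi-step agent loop around a chat model.
def call_model(messages):
    # Placeholder for a real chat-completions call; returns a canned
    # final answer so the sketch runs standalone.
    return "Done: " + messages[-1]["content"][:40]

def search(query: str) -> str:
    # Made-up tool for illustration only.
    return f"results for {query!r}"

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if reply.startswith("SEARCH:"):
            # Run the tool and feed the observation back to the model.
            observation = search(reply.removeprefix("SEARCH:").strip())
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": observation})
        else:
            return reply  # final answer: stop looping
    return "step limit reached"

print(run_agent("Find the Mixtral 8x22B release date."))
```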

Available On

Provider  | Model ID  | Context | Max Output | Input Cost | Output Cost | Throughput | Latency
Fireworks | fireworks | 66K     | -          | $0.90/M    | $0.90/M     | 87.0 t/s   | 407 ms
Mistral   | mistral   | 66K     | -          | $2.00/M    | $6.00/M     | 54.3 t/s   | 318 ms
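
The throughput and latency columns give a rough end-to-end estimate: total time ≈ first-token latency + output tokens ÷ throughput. A small sketch using the figures above (the 500-token output size is an arbitrary example):

```python
# Back-of-the-envelope generation time from the provider table.
def gen_time_s(output_tokens: int, throughput_tps: float, latency_ms: float) -> float:
    return latency_ms / 1000 + output_tokens / throughput_tps

for provider, tps, lat in [("Fireworks", 87.0, 407), ("Mistral", 54.3, 318)]:
    print(f"{provider}: ~{gen_time_s(500, tps, lat):.1f}s for 500 output tokens")
# Fireworks: ~6.2s; Mistral: ~9.5s
```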
Standard Pricing

Input Tokens
$0.90 per million tokens ($0.0009 per 1K tokens)

Output Tokens
$0.90 per million tokens ($0.0009 per 1K tokens)
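
To turn a per-million price into a per-request cost, multiply each token count by its rate and divide by one million. A minimal sketch using the $0.90/M rate above (the 10K-input/1K-output request is an arbitrary example):

```python
# Request cost from per-million-token prices.
def request_cost(input_toks: int, output_toks: int,
                 in_per_m: float, out_per_m: float) -> float:
    return (input_toks * in_per_m + output_toks * out_per_m) / 1_000_000

# 10,000 input + 1,000 output tokens at $0.90/M each way:
print(f"${request_cost(10_000, 1_000, 0.90, 0.90):.4f}")  # $0.0099
```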
