
Mistral: Mixtral 8x22B Instruct

Mistral
Input: text
Output: text
Released: Apr 17, 2024
Updated: Mar 28, 2025

Mistral's official instruction-tuned version of Mixtral 8x22B, a sparse mixture-of-experts (MoE) model. It activates 39B of its 141B total parameters per token, making it significantly cheaper to run than a dense model of comparable quality. Its strengths include:

  • strong math, coding, and reasoning
  • large context length (64k)
  • fluency in English, French, Italian, German, and Spanish

See benchmarks in Mistral's launch announcement.
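
The providers listed below typically serve this model behind OpenAI-compatible chat completion endpoints. As a minimal sketch of what a request can look like: the base URL, API key variable, and model slug here are placeholders, not values taken from this page; substitute the exact identifiers from your provider's docs.

```python
# Minimal sketch of a chat completion request, assuming the provider
# exposes an OpenAI-compatible endpoint. BASE_URL, PROVIDER_API_KEY,
# and MODEL are hypothetical placeholders.
import os
import requests

BASE_URL = "https://api.example-provider.com/v1"  # placeholder endpoint
API_KEY = os.environ["PROVIDER_API_KEY"]          # placeholder env var
MODEL = "mixtral-8x22b-instruct"                  # placeholder model slug

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL,
        "messages": [
            {"role": "user", "content": "Summarize the Mixtral 8x22B architecture."}
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```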

65,536 Token Context

Process and analyze large documents and conversations.

Advanced Coding

Improved capabilities on front-end development and full-stack coding tasks.

Agentic Workflows

Autonomously navigate multi-step processes with improved reliability.

Available On

| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|---|---|---|---|---|---|---|---|
| Nebius AI Studio | nebiusAiStudio | 66K | - | $0.40/M | $1.20/M | 54.4 t/s | 194 ms |
| Fireworks | fireworks | 66K | - | $0.90/M | $0.90/M | 71.9 t/s | 466 ms |
| Mistral | mistral | 66K | - | $2.00/M | $6.00/M | 69.4 t/s | 282 ms |
Standard Pricing

Input Tokens: $0.40 per 1M tokens ($0.0000004 per token)

Output Tokens: $1.20 per 1M tokens ($0.0000012 per token)
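
Per-request cost is simple arithmetic: token count divided by one million, times the per-million rate. A small sketch comparing the providers listed above at their quoted prices:

```python
# Cost comparison using the per-million-token prices from the table above.
# Rates are (input $/M tokens, output $/M tokens).
PROVIDERS = {
    "Nebius AI Studio": (0.40, 1.20),
    "Fireworks": (0.90, 0.90),
    "Mistral": (2.00, 6.00),
}

def request_cost(input_tokens: int, output_tokens: int, rates: tuple[float, float]) -> float:
    """Dollar cost of one request at the given per-million-token rates."""
    in_rate, out_rate = rates
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# Example: a 10,000-token prompt with a 1,000-token completion.
for name, rates in PROVIDERS.items():
    print(f"{name}: ${request_cost(10_000, 1_000, rates):.4f}")
# Nebius AI Studio: $0.0052
# Fireworks: $0.0099
# Mistral: $0.0260
```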
