Mistral: Mixtral 8x7B Instruct

Mistral
Input: text
Output: text
Released: Dec 10, 2023 · Updated: Mar 28, 2025

Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts model from Mistral AI, fine-tuned for chat and instruction following. Each layer contains 8 expert feed-forward networks, of which 2 are routed per token, for a total of roughly 47 billion parameters with about 13 billion active per token.

Instruct model fine-tuned by Mistral. #moe
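The routing scheme described above can be illustrated with a minimal sketch. This is a toy example, not Mistral's implementation: `top2_gate` and the scalar "experts" are hypothetical stand-ins showing how a gate selects 2 of 8 experts, renormalizes their scores, and mixes only those two outputs.

```python
import math

def top2_gate(logits):
    """Pick the two highest-scoring experts and renormalize via softmax."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:2]
    exps = [math.exp(logits[i]) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

def moe_layer(x, experts, gate_logits):
    """Weighted sum of the two selected experts' outputs; the rest are skipped."""
    return sum(w * experts[i](x) for i, w in top2_gate(gate_logits))

# Toy scalar experts standing in for feed-forward networks.
experts = [lambda x, k=k: (k + 1) * x for k in range(8)]
y = moe_layer(2.0, experts, [0.1, 3.0, 0.2, 0.0, 2.0, 0.5, 0.3, 0.1])
```

Because only 2 of the 8 experts run per token, inference cost scales with the active parameters rather than the full parameter count.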

32,768 Token Context

Process and analyze large documents and conversations.

Advanced Coding

Improved capabilities in front-end development and full-stack coding tasks.

Agentic Workflows

Autonomously navigate multi-step processes with improved reliability.

Available On

| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|---|---|---|---|---|---|---|---|
| Nebius AI Studio | nebiusAiStudio | 33K | - | $0.08/M | $0.24/M | 125.2 t/s | 260 ms |
| DeepInfra | deepInfra | 33K | 16K | $0.24/M | $0.24/M | 126.9 t/s | 507 ms |
| Together | together | 33K | 2K | $0.60/M | $0.60/M | 77.6 t/s | 378 ms |
Standard Pricing
Input Tokens

$0.00000008 per token ($0.08 per 1M tokens)

Output Tokens

$0.00000024 per token ($0.24 per 1M tokens)
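The standard rates above make per-request cost a simple product of token counts and per-token prices. A minimal estimator, using the $0.08/M input and $0.24/M output rates from this page (the function name and example token counts are illustrative):

```python
# Per-token rates from the Standard Pricing section.
INPUT_RATE = 0.08 / 1_000_000   # $ per input token  ($0.08 per 1M)
OUTPUT_RATE = 0.24 / 1_000_000  # $ per output token ($0.24 per 1M)

def request_cost(input_tokens, output_tokens):
    """Estimated dollar cost for a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10,000-token prompt with a 1,000-token completion.
cost = request_cost(10_000, 1_000)  # $0.0008 + $0.00024 = $0.00104
```

Note that provider rates in the table above may differ from these standard rates.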