Mistral: Mixtral 8x7B Instruct
Mistral
Input: text
Output: text
Released: Dec 10, 2023 • Updated: Mar 28, 2025
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts model by Mistral AI, fine-tuned for chat and instruction use. It combines 8 experts (feed-forward networks) for a total of roughly 47 billion parameters, only a fraction of which are active for any given token.
Instruct model fine-tuned by Mistral. #moe
32,768 Token Context
Process and analyze large documents and conversations.
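The 32,768-token window bounds how much text fits in a single request. A rough pre-check can be sketched with the common heuristic of ~4 characters per token for English text; exact counts require the model's tokenizer, so treat this ratio as an assumption:

```python
CONTEXT_LIMIT = 32_768  # Mixtral 8x7B Instruct context window, in tokens

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate; real counts need the model's tokenizer."""
    return int(len(text) / chars_per_token)

def fits_in_context(document: str, reserved_for_output: int = 1024) -> bool:
    """Check whether a document plus an output budget fits the window."""
    return estimate_tokens(document) + reserved_for_output <= CONTEXT_LIMIT
```

For long documents that fail this check, the usual approach is to split the text into chunks that each pass it.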
Advanced Coding
Improved capabilities in front-end development and full-stack tasks.
Agentic Workflows
Autonomously navigate multi-step processes with improved reliability.
Available On
| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|---|---|---|---|---|---|---|---|
| Nebius AI Studio | nebiusAiStudio | 33K | - | $0.08/M | $0.24/M | 125.2 t/s | 260 ms |
| DeepInfra | deepInfra | 33K | 16K | $0.24/M | $0.24/M | 126.9 t/s | 507 ms |
| Together | together | 33K | 2K | $0.60/M | $0.60/M | 77.6 t/s | 378 ms |
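Providers of this kind typically serve the model behind an OpenAI-compatible `/chat/completions` route. A minimal sketch of building such a request payload follows; the endpoint URL and the model slug are illustrative assumptions, so substitute the values from your provider's documentation:

```python
import json

# Hypothetical endpoint; the actual base URL varies per provider.
ENDPOINT = "https://api.example-provider.com/v1/chat/completions"  # assumption

def build_request(prompt: str, max_tokens: int = 256) -> str:
    """Build an OpenAI-style chat-completions payload for Mixtral."""
    payload = {
        # Common community slug for this model; providers may use their own ID.
        "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)
```

The resulting JSON is POSTed to the endpoint with an `Authorization: Bearer <api-key>` header.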
Standard Pricing
Input Tokens
$0.00000008 per token ($0.08/M tokens)
Output Tokens
$0.00000024 per token ($0.24/M tokens)
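Per-token prices make request costs straightforward to compute: multiply each token count by its per-token rate and sum. A sketch using the standard rates above ($0.08 and $0.24 per million input and output tokens):

```python
INPUT_COST_PER_TOKEN = 0.08 / 1_000_000   # $0.08 per million input tokens
OUTPUT_COST_PER_TOKEN = 0.24 / 1_000_000  # $0.24 per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the standard rates."""
    return (input_tokens * INPUT_COST_PER_TOKEN
            + output_tokens * OUTPUT_COST_PER_TOKEN)

# e.g. a full 32K-token prompt with a 1K-token reply:
# request_cost(32_768, 1_024)
```

At these rates, a request that fills the entire 32K context costs well under a cent.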