Mistral: Codestral Mamba
Mistral
Input: text
Output: text
Released: Jul 19, 2024 • Updated: Mar 28, 2025
A 7.3B parameter Mamba-based model designed for code and reasoning tasks.
- Linear-time inference, theoretically allowing unbounded sequence lengths
- 256k-token context window
- Optimized for quick responses, especially beneficial for code productivity
- Performs comparably to state-of-the-art transformer models in code and reasoning tasks
- Available under the Apache 2.0 license for free use, modification, and distribution
262,144 Token Context
Process and analyze large documents and conversations.
Advanced Coding
Improved capabilities in front-end and full-stack development.
Agentic Workflows
Autonomously navigate multi-step processes with improved reliability.
Available On
| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|---|---|---|---|---|---|---|---|
| Mistral | mistral | 262K | - | $0.25/M | $0.25/M | 111.0 t/s | 419 ms |
Standard Pricing
Input Tokens
$0.00000025
per token
Output Tokens
$0.00000025
per token
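
At $0.25 per million tokens for both input and output, cost estimation is a single multiplication. A minimal sketch (the token counts in the example are hypothetical; real counts come from the provider's usage report):

```python
# Rate from the pricing table above: $0.25 per million tokens,
# applied equally to input and output tokens.
RATE_PER_TOKEN = 0.25 / 1_000_000  # $0.00000025 per token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in dollars for one request at the listed rate."""
    return (input_tokens + output_tokens) * RATE_PER_TOKEN

# Example: a 100k-token prompt (well within the 262,144-token context)
# with a 2k-token completion.
print(f"${request_cost(100_000, 2_000):.4f}")  # → $0.0255
```

Because input and output are billed at the same rate here, only the total token count matters; for models with asymmetric pricing the two counts would need separate rates.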