Mistral: Codestral Mamba

Mistral
Input: text
Output: text
Released: Jul 19, 2024 · Updated: Mar 28, 2025

A 7.3B parameter Mamba-based model designed for code and reasoning tasks.

  • Linear-time inference, allowing for theoretically unbounded sequence lengths
  • 256k token context window
  • Optimized for quick responses, especially beneficial for code productivity
  • Performs comparably to state-of-the-art transformer models in code and reasoning tasks
  • Available under the Apache 2.0 license for free use, modification, and distribution
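The practical significance of linear-time inference can be sketched with a toy cost model (an illustration of the asymptotic argument, not a benchmark): a transformer's attention step must attend over all prior tokens, so per-token work grows with context length, while a Mamba-style recurrent state update does constant work per token. The hidden dimension `d` below is an arbitrary placeholder, not the model's actual width.

```python
# Toy comparison of per-token work as context grows (illustrative only).

def transformer_step_ops(context_len: int, d: int = 4096) -> int:
    # Attention attends over every prior token: per-step cost grows
    # linearly with context, so full-sequence cost is quadratic.
    return context_len * d

def mamba_step_ops(context_len: int, d: int = 4096) -> int:
    # Recurrent state update: constant work per token, independent
    # of how long the context already is.
    return d

for n in (1_000, 100_000, 262_144):
    print(f"{n:>8} tokens: transformer={transformer_step_ops(n):>13,} "
          f"mamba={mamba_step_ops(n):,}")
```

Under this model, generating at the 262,144-token limit costs a transformer over 260x more per token than at 1,000 tokens, while the Mamba step is unchanged.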

262,144 Token Context

Process and analyze large documents and conversations.

Advanced Coding

Improved capabilities in front-end development and full-stack updates.

Agentic Workflows

Autonomously navigate multi-step processes with improved reliability.

Available On

| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|----------|----------|---------|------------|------------|-------------|------------|---------|
| Mistral  | mistral  | 262K    | -          | $0.25/M    | $0.25/M     | 111.0 t/s  | 419 ms  |
Standard Pricing

Input Tokens
$0.00000025 per token ($0.25 per 1M tokens)

Output Tokens
$0.00000025 per token ($0.25 per 1M tokens)
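At $0.25 per million tokens for both input and output, estimating the cost of a request is simple arithmetic. The helper below is a hypothetical sketch (not part of any official SDK), assuming the listed rates:

```python
# Hypothetical cost estimator at the listed rates ($0.25/M tokens each way).
INPUT_RATE = 0.25 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.25 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 200k-token prompt with a 4k-token completion:
print(f"${estimate_cost(200_000, 4_000):.4f}")  # → $0.0510
```

Even a prompt filling most of the 262K context costs only a few cents at these rates.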
