xAI: Grok 3 Mini Beta
Grok 3 Mini is a lightweight, smaller thinking model. Unlike traditional models that generate answers immediately, Grok 3 Mini thinks before responding. It’s ideal for reasoning-heavy tasks that don’t demand extensive domain knowledge, and shines in math-specific and quantitative use cases, such as solving challenging puzzles or math problems.
Transparent "thinking" traces accessible. Defaults to low reasoning, can boost with setting reasoning: { effort: "high" }
Note: there are two xAI endpoints for this model. By default, requests are routed to the base endpoint. To use the fast endpoint, add provider: { sort: "throughput" } to your request so providers are sorted by throughput instead.
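Building on the sketch above, the provider preference is just an extra field in the same request body (the prompt text here is purely illustrative):

```typescript
// Same request body as the sketch above, with provider routing sorted by
// throughput so the fast endpoint is preferred for this model.
const body = {
  model: "x-ai/grok-3-mini-beta", // assumed model slug
  messages: [{ role: "user", content: "Is 2027 a prime number?" }],
  reasoning: { effort: "high" },
  provider: { sort: "throughput" }, // route by throughput
};
```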
- 131,072 Token Context: Process and analyze large documents and conversations.
- Hybrid Reasoning: Choose between rapid responses and extended, step-by-step processing for complex tasks.
- Advanced Coding: Improved capabilities in front-end development and full-stack updates.
- Agentic Workflows: Autonomously navigate multi-step processes with improved reliability.
Available On
Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
---|---|---|---|---|---|---|---|
xAI | grok-3-mini-beta | 131K | - | $0.30/M | $0.50/M | 856.0 t/s | 7054 ms |
xAI Fast | grok-3-mini-fast-beta | 131K | - | $0.60/M | $4.00/M | 194.5 t/s | 1605 ms |
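For a rough sense of what the rates above translate to in practice, here is a small helper that estimates request cost from token counts; the hard-coded rates come from the table, and treating the $/M figures as USD per million tokens is the assumed interpretation:

```typescript
// Rough cost estimate from the table above. Rates are USD per million tokens.
const RATES = {
  xai: { input: 0.3, output: 0.5 },     // base endpoint
  xaiFast: { input: 0.6, output: 4.0 }, // fast endpoint
};

function estimateCostUSD(
  provider: keyof typeof RATES,
  inputTokens: number,
  outputTokens: number,
): number {
  const r = RATES[provider];
  return (inputTokens * r.input + outputTokens * r.output) / 1_000_000;
}

// Example: 10,000 input tokens and 2,000 output tokens on the fast endpoint.
console.log(estimateCostUSD("xaiFast", 10_000, 2_000)); // 0.014 (about 1.4 cents)
```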