Qwen: Qwen3 32B
Qwen3-32B is a dense 32.8B-parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for tasks like math, coding, and logical inference, and a "non-thinking" mode for faster, general-purpose conversation. The model demonstrates strong performance in instruction following, agent tool use, creative writing, and multilingual tasks across 100+ languages and dialects. It natively handles 32K-token contexts and can extend to 131K tokens using YaRN-based scaling.
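The thinking/non-thinking switch is exposed through the chat template. Below is a minimal sketch using Hugging Face transformers and the `enable_thinking` flag documented in the Qwen3 model card; the prompt, generation settings, and hardware assumptions (a 32.8B dense model needs substantial GPU memory) are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-32B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Prove that the sum of two even integers is even."}]

# enable_thinking=True renders the template so the model emits a <think>...</think>
# block before the final answer; set it to False for fast, non-reasoning replies.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```

The model card also describes soft switches (`/think` and `/no_think` appended to a user turn) for toggling the mode within a multi-turn conversation without changing the template flag.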
40,960 Token Context
Process and analyze large documents and conversations; a YaRN scaling sketch follows these highlights.
Hybrid Reasoning
Choose between rapid responses and extended, step-by-step processing for complex tasks.
Advanced Coding
Improved capabilities in front-end development and full-stack updates.
Agentic Workflows
Autonomously navigate multi-step processes with improved reliability.
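Extending beyond the native window toward roughly 131K tokens relies on YaRN rope scaling. The snippet below is a sketch of one way to apply it with transformers, assuming the `rope_scaling` values given in the Qwen3 model card (factor 4.0 over the 32,768-token native length). The model card advises enabling static YaRN only when prompts actually need the longer window, since it can degrade quality on short contexts.

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-32B"

# Static YaRN scaling: 4.0 x 32,768 native positions ~= 131K tokens.
# These values mirror the rope_scaling block shown in the Qwen3 model card.
config = AutoConfig.from_pretrained(model_name)
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}
config.max_position_embeddings = 131072

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, config=config, torch_dtype="auto", device_map="auto"
)
```

Serving stacks such as vLLM and SGLang expose equivalent rope-scaling launch options; hosted endpoints like those in the table below fix their own context limits, so this only applies when you run the weights yourself.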
Available On
| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|---|---|---|---|---|---|---|---|
| DeepInfra | deepInfra | 41K | - | $0.10/M | $0.30/M | 43.2 t/s | 1041 ms |
| Nebius | nebiusAiStudio | 41K | - | $0.10/M | $0.30/M | 42.8 t/s | 765 ms |
| Lambda | lambda | 41K | 41K | $0.10/M | $0.30/M | 39.2 t/s | 541 ms |
| Novita | novitaAi | 41K | 20K | $0.10/M | $0.45/M | 27.1 t/s | 1063 ms |
| Parasail | parasail | 41K | 41K | $0.10/M | $0.50/M | 41.2 t/s | 737 ms |
| GMICloud | gmiCloud | 33K | - | $0.10/M | $0.60/M | 50.8 t/s | 1154 ms |
| Nebius | nebiusAiStudio | 41K | - | $0.20/M | $0.60/M | 141.5 t/s | 573 ms |
| Cerebras | cerebras | 33K | - | $0.40/M | $0.80/M | 2214.0 t/s | 455 ms |
| SambaNova | sambaNova | 33K | 4K | $0.40/M | $0.80/M | 317.0 t/s | 682 ms |
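Most of the hosts above expose OpenAI-compatible chat completions endpoints, which is also how the model's agent tool use is typically driven. The sketch below uses the openai Python client; the base URL, API key, model slug, and tool definition are placeholders (each provider publishes its own identifiers), so treat it as an illustration rather than a copy-paste integration.

```python
import json
from openai import OpenAI

# Placeholder endpoint and credentials -- substitute the values from your provider's docs.
client = OpenAI(
    base_url="https://api.example-provider.com/v1",
    api_key="YOUR_API_KEY",
)

# A hypothetical tool definition, included only to show the tool-calling request shape.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="qwen/qwen3-32b",  # placeholder slug; the exact ID varies by provider
    messages=[{"role": "user", "content": "What's the weather in Lisbon right now?"}],
    tools=tools,
    max_tokens=1024,
)

message = response.choices[0].message
if message.tool_calls:
    # The model asked to call a tool; arguments arrive as a JSON string.
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(message.content)
```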