DeepSeek: R1 Distill Qwen 32B
DeepSeek R1 Distill Qwen 32B is a distilled large language model based on Qwen 2.5 32B, fine-tuned on outputs from DeepSeek R1. It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Benchmark results include:

- AIME 2024 pass@1: 72.6
- MATH-500 pass@1: 94.3
- CodeForces rating: 1691

Distillation from DeepSeek R1's outputs gives the model performance competitive with much larger frontier models.
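As a sketch of how a model like this is typically queried, the snippet below builds an OpenAI-compatible chat-completions payload. The model identifier `deepseek-ai/DeepSeek-R1-Distill-Qwen-32B` (the Hugging Face repo name) and the sampling settings are assumptions; check the slug and recommended parameters in your provider's documentation.

```python
import json

# Assumed model identifier; individual providers may expose a different slug.
MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"

def build_chat_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-compatible chat-completions payload for the model."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        # Leave headroom for the model's step-by-step reasoning tokens.
        "max_tokens": max_tokens,
        "temperature": 0.6,
    }

payload = build_chat_request("Prove that the square root of 2 is irrational.")
print(json.dumps(payload, indent=2))
```

The payload can then be POSTed to the provider's `/chat/completions` endpoint with any HTTP client.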
131,072 Token Context
Process and analyze large documents and conversations.
Hybrid Reasoning
Choose between rapid responses and extended, step-by-step processing for complex tasks.
Advanced Coding
Improved capabilities in front-end development and full-stack updates.
Agentic Workflows
Autonomously navigate multi-step processes with improved reliability.
Available On
| Provider | Provider ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|---|---|---|---|---|---|---|---|
| DeepInfra | deepInfra | 131K | 16K | $0.12/M | $0.18/M | 49.6 t/s | 510 ms |
| Novita | novitaAi | 64K | 32K | $0.30/M | $0.30/M | 22.0 t/s | 1800 ms |
| GMICloud | gmiCloud | 131K | - | $0.50/M | $0.90/M | 37.6 t/s | 1457 ms |
| Cloudflare | cloudflare | 80K | - | $0.50/M | $4.88/M | 36.5 t/s | 867 ms |
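The per-million-token prices above translate directly into a per-request cost estimate. A minimal sketch (prices in USD per 1M tokens, copied from the table):

```python
# USD per 1M tokens (input, output), taken from the provider table above.
PRICING = {
    "DeepInfra":  (0.12, 0.18),
    "Novita":     (0.30, 0.30),
    "GMICloud":   (0.50, 0.90),
    "Cloudflare": (0.50, 4.88),
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request on the given provider."""
    in_price, out_price = PRICING[provider]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A 10K-token prompt with a 2K-token completion on DeepInfra:
print(f"${request_cost('DeepInfra', 10_000, 2_000):.5f}")  # → $0.00156
```

Note how the spread in output pricing matters for a reasoning model, which can emit long chains of thought: the same request costs roughly ten times more on Cloudflare than on DeepInfra.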