
DeepSeek: R1 Distill Qwen 32B

Qwen
Input: text
Output: text
Released: Jan 29, 2025 · Updated: May 29, 2025

DeepSeek R1 Distill Qwen 32B is a distilled large language model based on Qwen 2.5 32B, fine-tuned on outputs from DeepSeek R1. It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Other benchmark results include:

- AIME 2024 pass@1: 72.6
- MATH-500 pass@1: 94.3
- CodeForces rating: 1691

By distilling DeepSeek R1's outputs into a smaller dense model, it achieves performance comparable to larger frontier models.
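Because the model is served through OpenAI-compatible provider APIs (see the providers listed below), a quick way to try it is with the standard OpenAI Python client. This is a minimal sketch only: the base URL and model slug shown here are assumptions and vary by provider, so check your provider's documentation for the exact values.

```python
# Minimal sketch of calling the model through an OpenAI-compatible endpoint.
# The base_url and model slug are illustrative assumptions, not fixed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepinfra.com/v1/openai",  # assumed provider endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",  # assumed model slug
    messages=[
        {"role": "user", "content": "How many primes are there below 100?"}
    ],
    max_tokens=1024,
)

print(response.choices[0].message.content)
```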

131,072 Token Context

Process and analyze large documents and conversations.

Hybrid Reasoning

Choose between rapid responses and extended, step-by-step processing for complex tasks.
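DeepSeek R1 distill models emit their step-by-step reasoning as a <think>…</think> block preceding the final answer in the raw completion text, though some providers strip it or return it in a separate field. A minimal sketch, assuming the tagged format, for separating the reasoning trace from the answer:

```python
import re


def split_reasoning(text: str) -> tuple[str, str]:
    """Separate an optional <think>...</think> reasoning trace from the final answer."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        # No reasoning block present (e.g. provider stripped it): return answer only.
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer


reasoning, answer = split_reasoning(
    "<think>2 + 2 is basic arithmetic.</think>\nThe answer is 4."
)
print(answer)  # -> The answer is 4.
```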

Advanced Coding

Improved capabilities in front-end development and full-stack updates.

Agentic Workflows

Autonomously navigate multi-step processes with improved reliability.

Available On

| Provider   | Model ID   | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|------------|------------|---------|------------|------------|-------------|------------|---------|
| DeepInfra  | deepInfra  | 131K    | 16K        | $0.12/M    | $0.18/M     | 47.4 t/s   | 478 ms  |
| Novita     | novitaAi   | 64K     | 32K        | $0.30/M    | $0.30/M     | 21.9 t/s   | 1827 ms |
| GMICloud   | gmiCloud   | 131K    | -          | $0.50/M    | $0.90/M     | -          | -       |
| Cloudflare | cloudflare | 80K     | -          | $0.50/M    | $4.88/M     | 36.5 t/s   | 599 ms  |
Standard Pricing

Input Tokens
$0.12 per 1M tokens ($0.00012 per 1K tokens)

Output Tokens
$0.18 per 1M tokens ($0.00018 per 1K tokens)
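For budgeting, per-request cost can be estimated directly from the per-1M-token prices above. A small sketch, using the DeepInfra rates from the table as the example inputs:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate request cost in USD from per-1M-token prices."""
    return (input_tokens * input_price_per_m +
            output_tokens * output_price_per_m) / 1_000_000


# Example with the DeepInfra rates listed above ($0.12/M input, $0.18/M output):
cost = estimate_cost(input_tokens=4_000, output_tokens=1_500,
                     input_price_per_m=0.12, output_price_per_m=0.18)
print(f"${cost:.6f}")  # -> $0.000750
```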
