DeepSeek: R1 Distill Qwen 1.5B
Input: text
Output: text
Released: Jan 31, 2025 • Updated: May 2, 2025
DeepSeek R1 Distill Qwen 1.5B is a large language model distilled from Qwen 2.5 Math 1.5B using outputs from DeepSeek R1. Despite its very small size, it is efficient and outperforms GPT-4o-0513 on math benchmarks.
Other benchmark results include:
- AIME 2024 pass@1: 28.9
- AIME 2024 cons@64: 52.7
- MATH-500 pass@1: 83.9
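For context on the metrics above: pass@1 scores the fraction of problems solved by a single sampled answer, while cons@64 takes a majority vote over 64 sampled answers per problem. A minimal sketch of how these metrics can be computed (the grading logic here is illustrative, not DeepSeek's actual evaluation harness):

```python
from collections import Counter

def pass_at_1(samples_per_problem, answers):
    # samples_per_problem: one list of sampled model answers per problem
    # answers: the ground-truth answer for each problem
    # pass@1 averages the correctness of a single sample per problem
    correct = sum(1 for samples, ans in zip(samples_per_problem, answers)
                  if samples[0] == ans)
    return correct / len(answers)

def cons_at_k(samples_per_problem, answers, k=64):
    # cons@k: majority vote over the first k samples for each problem
    correct = 0
    for samples, ans in zip(samples_per_problem, answers):
        vote, _ = Counter(samples[:k]).most_common(1)[0]
        correct += (vote == ans)
    return correct / len(answers)
```

Majority voting explains why cons@64 (52.7) is far above pass@1 (28.9): even when most individual samples fail, the correct answer often dominates the vote.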
The model is fine-tuned on DeepSeek R1's outputs, giving it reasoning performance competitive with much larger frontier models.
- 131,072-token context: process and analyze large documents and conversations.
- Hybrid reasoning: choose between rapid responses and extended, step-by-step processing for complex tasks.
- Advanced coding: improved capabilities in front-end development and full-stack updates.
- Agentic workflows: autonomously navigate multi-step processes with improved reliability.
Available On
| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|---|---|---|---|---|---|---|---|
| Together | together | 131K | 33K | $0.18/M | $0.18/M | 437.1 t/s | 288 ms |
Standard Pricing
- Input tokens: $0.00000018 per token ($0.18 per 1M tokens)
- Output tokens: $0.00000018 per token ($0.18 per 1M tokens)
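Since input and output are both billed at $0.18 per million tokens, request cost is a simple linear function of token counts. A quick sketch using the rates listed above:

```python
# Rates from the pricing table: $0.18 per 1M tokens, input and output alike.
INPUT_RATE = 0.18 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.18 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in dollars for a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
```

For example, a 10,000-token prompt that produces a 2,000-token reasoning trace costs roughly $0.00216, which is what makes this distill attractive for high-volume math workloads.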