DeepSeek: R1 Distill Llama 70B
Input: text
Output: text
Released: Jan 23, 2025 · Updated: May 2, 2025
DeepSeek R1 Distill Llama 70B is a distilled large language model based on Llama-3.3-70B-Instruct, fine-tuned on outputs from DeepSeek R1. The distillation transfers R1's reasoning ability to the smaller Llama base, yielding strong results across multiple benchmarks, including:
- AIME 2024 pass@1: 70.0
- MATH-500 pass@1: 94.5
- CodeForces Rating: 1633
This fine-tuning on DeepSeek R1's outputs gives the model performance competitive with larger frontier models.
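R1-style distills typically emit their chain of thought inside `<think>…</think>` tags ahead of the final answer. A minimal sketch for separating the two when post-processing completions (the tag convention is an assumption about the serving provider's output format; the sample text is illustrative, not real model output):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split an R1-style completion into (reasoning, answer).

    Assumes the chain of thought is wrapped in <think>...</think>
    before the final answer; if no tags are present, the whole
    text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

# Illustrative completion, not actual model output.
raw = "<think>94.5 > 70.0, so MATH-500 is the higher score.</think>MATH-500."
reasoning, answer = split_reasoning(raw)
```

Stripping the reasoning block is useful when only the final answer should reach end users, while the raw text can still be logged for inspection.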
131,072 Token Context
Process and analyze large documents and conversations.
Hybrid Reasoning
Choose between rapid responses and extended, step-by-step processing for complex tasks.
Advanced Coding
Improved capabilities in front-end development and full-stack updates.
Agentic Workflows
Autonomously navigate multi-step processes with improved reliability.
Available On
| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|---|---|---|---|---|---|---|---|
| DeepInfra | deepInfra | 131K | 16K | $0.10/M | $0.40/M | 32.3 t/s | 452 ms |
| InferenceNet | inferenceNet | 128K | 16K | $0.10/M | $0.40/M | 15.1 t/s | 1278 ms |
| Lambda | lambda | 131K | 131K | $0.20/M | $0.60/M | 66.3 t/s | 350 ms |
| Phala | phala | 131K | - | $0.20/M | $0.70/M | 34.2 t/s | 645 ms |
| GMICloud | gmiCloud | 131K | - | $0.25/M | $0.75/M | 34.5 t/s | 993 ms |
| Nebius | nebiusAiStudio | 131K | - | $0.25/M | $0.75/M | 59.2 t/s | 495 ms |
| SambaNova | sambaNova | 131K | 4K | $0.70/M | $1.40/M | 234.6 t/s | 1735 ms |
| Groq | groq | 131K | 131K | $0.75/M | $0.99/M | 299.1 t/s | 396 ms |
| Novita | novitaAi | 32K | 32K | $0.80/M | $0.80/M | 32.6 t/s | 787 ms |
| Together | together | 131K | 33K | $2.00/M | $2.00/M | 105.6 t/s | 805 ms |
| Cerebras | cerebras | 32K | 32K | $2.20/M | $2.50/M | 2235.5 t/s | 525 ms |
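Since all providers price per million tokens, the cost of a request follows directly from its token mix, and the cheapest provider depends on the input/output ratio. A small helper comparing a few rows from the table (rates are hard-coded from the listing above and will drift as pricing changes):

```python
# Per-1M-token (input, output) prices in USD, copied from the table above.
# These values change over time; treat them as a snapshot, not a source of truth.
PROVIDERS = {
    "DeepInfra": (0.10, 0.40),
    "Lambda": (0.20, 0.60),
    "Groq": (0.75, 0.99),
    "Cerebras": (2.20, 2.50),
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at a provider's listed rates."""
    in_rate, out_rate = PROVIDERS[provider]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

def cheapest(input_tokens: int, output_tokens: int) -> str:
    """Provider with the lowest cost for this token mix."""
    return min(PROVIDERS, key=lambda p: request_cost(p, input_tokens, output_tokens))
```

For example, a 100,000-token prompt with a 5,000-token completion costs $0.012 at DeepInfra's rates, the cheapest of the four. Price is only one axis, though: the table's throughput and latency columns matter more for interactive workloads, where Groq or Cerebras may be worth the premium.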
Standard Pricing
Input Tokens
$0.10
per 1M tokens ($0.0000001 per token)
Output Tokens
$0.40
per 1M tokens ($0.0000004 per token)
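At the standard rates, cost scales linearly with token counts. A quick arithmetic check for a request with 10,000 input and 2,000 output tokens (token counts are illustrative):

```python
INPUT_RATE = 0.10 / 1_000_000   # $0.10 per 1M input tokens
OUTPUT_RATE = 0.40 / 1_000_000  # $0.40 per 1M output tokens

# 10,000 * $0.0000001 + 2,000 * $0.0000004 = $0.001 + $0.0008 = $0.0018
cost = 10_000 * INPUT_RATE + 2_000 * OUTPUT_RATE
```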