DeepSeek-Coder-V2
Other
Input: text
Output: text
Released: May 14, 2024 • Updated: Mar 28, 2025
DeepSeek-Coder-V2 is an open-source Mixture-of-Experts (MoE) code language model, further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens.
The original V1 model was trained from scratch on 2T tokens, composed of 87% code and 13% natural language in both English and Chinese. It was pre-trained on a project-level code corpus with an additional fill-in-the-blank (fill-in-the-middle) objective.
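The fill-in-the-middle objective lets the model complete code between a given prefix and suffix rather than only left-to-right. A minimal sketch of such a prompt is shown below; the sentinel strings are assumptions for illustration, so check the model's tokenizer configuration for the exact FIM tokens it was trained with.

```python
# Hypothetical fill-in-the-middle (FIM) prompt layout.
# The sentinel strings below are assumed placeholders, not the
# model's actual special tokens; verify them against the tokenizer.
FIM_BEGIN = "<fim_begin>"  # assumed sentinel
FIM_HOLE = "<fim_hole>"    # assumed sentinel
FIM_END = "<fim_end>"      # assumed sentinel

prefix = "def factorial(n: int) -> int:\n    if n <= 1:\n        return 1\n    return "
suffix = "\n\nprint(factorial(5))\n"

# The model is asked to generate the code that belongs between the
# prefix and suffix (here: the recursive call "n * factorial(n - 1)").
prompt = f"{FIM_BEGIN}{prefix}{FIM_HOLE}{suffix}{FIM_END}"
print(prompt)
```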
128,000 Token Context
Process and analyze large documents and conversations.
Advanced Coding
Improved capabilities in front-end development and full-stack updates.
Agentic Workflows
Autonomously navigate multi-step processes with improved reliability.
Available On
| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|---|---|---|---|---|---|---|---|
| Nebius AI Studio | nebiusAiStudio | 128K | - | $0.04/M | $0.12/M | 122.0 t/s | 234 ms |
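Providers such as Nebius AI Studio typically expose models through an OpenAI-compatible API. The snippet below is a minimal sketch under that assumption; the base URL, environment variable name, and model identifier are assumptions, so confirm the exact values in the provider's documentation.

```python
import os
from openai import OpenAI  # pip install openai

# Assumed OpenAI-compatible endpoint and credentials; verify the base URL,
# API key variable, and model name against Nebius AI Studio's docs.
client = OpenAI(
    base_url="https://api.studio.nebius.ai/v1/",  # assumed base URL
    api_key=os.environ["NEBIUS_API_KEY"],         # assumed env var
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",  # assumed model ID
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```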
Standard Pricing
Input Tokens
$0.00000004
per token
Output Tokens
$0.00000012
per token
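At these per-token rates, the cost of a request is simply token count times price. A small worked example follows; the token counts are illustrative, not measurements.

```python
# Per-token prices from the pricing table above
# ($0.04 per 1M input tokens, $0.12 per 1M output tokens).
INPUT_PRICE_PER_TOKEN = 0.00000004
OUTPUT_PRICE_PER_TOKEN = 0.00000012

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request."""
    return input_tokens * INPUT_PRICE_PER_TOKEN + output_tokens * OUTPUT_PRICE_PER_TOKEN

# Illustrative example: a 100K-token prompt with a 20K-token completion.
cost = request_cost(100_000, 20_000)
print(f"${cost:.4f}")  # $0.0064
```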