Qwen: Qwen2.5 Coder 7B Instruct

Qwen
Input: text
Output: text
Released: Apr 15, 2025 · Updated: Apr 15, 2025

Qwen2.5-Coder-7B-Instruct is a 7B-parameter instruction-tuned language model optimized for code-related tasks such as code generation, code reasoning, and bug fixing. Built on the Qwen2.5 architecture, it incorporates RoPE positional embeddings, SwiGLU activations, RMSNorm, and grouped-query attention (GQA), with a native 32,768-token context window extendable to 128K tokens via YaRN-based extrapolation. It is trained on a large corpus of source code, synthetic data, and text-code grounding data, providing robust performance across programming languages and agentic coding workflows.
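The 128K YaRN extension mentioned above is typically enabled through the model's `config.json`. A sketch of the kind of `rope_scaling` entry involved, based on the values commonly documented for Qwen2.5 models (verify the exact fields against the official Qwen2.5-Coder README before use):

```json
{
  "rope_scaling": {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```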

This model is part of the Qwen2.5-Coder family and offers strong compatibility with tools like vLLM for efficient deployment. Released under the Apache 2.0 license.
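As a sketch of the vLLM deployment path mentioned above, the commands below launch an OpenAI-compatible server and query it. The model ID comes from the card; the flags and port are illustrative defaults, not requirements:

```shell
# Serve the model with vLLM's OpenAI-compatible API (default port 8000)
vllm serve Qwen/Qwen2.5-Coder-7B-Instruct --max-model-len 32768

# Query the standard chat completions route
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen2.5-Coder-7B-Instruct",
        "messages": [{"role": "user", "content": "Write a Python function that reverses a string."}]
      }'
```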

32,768-Token Context

Process and analyze large documents and conversations.

Advanced Coding

Improved capabilities in front-end development and full-stack updates.

Agentic Workflows

Autonomously navigate multi-step processes with improved reliability.

Available On

| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Nebius AI Studio | nebiusAiStudio | 33K | - | $0.01/M | $0.03/M | 214.8 t/s | 607 ms |
Standard Pricing

Input Tokens: $0.01 per 1M tokens ($0.00000001 per token)

Output Tokens: $0.03 per 1M tokens ($0.00000003 per token)
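A quick sketch of how these rates translate into per-request cost. The helper function and the example token counts are illustrative; the per-million prices are the ones listed above:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float = 0.01,
                 output_price_per_m: float = 0.03) -> float:
    """Estimate USD cost of one request at per-million-token rates."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion
cost = request_cost(2_000, 500)
print(f"${cost:.6f}")  # 2000*0.01/1M + 500*0.03/1M = $0.000035
```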
