Qwen2.5 7B Instruct
Qwen2.5 7B Instruct belongs to the latest series of Qwen large language models. Qwen2.5 brings the following improvements over Qwen2:
- Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to specialized expert models in these domains.
- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to diverse system prompts, enhancing role-play implementation and condition-setting for chatbots.
- Long-context support for up to 128K tokens, with generation of up to 8K tokens.
- Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
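The structured-output improvements above are easiest to see in a request body. The following is a minimal sketch of a payload for an OpenAI-compatible chat-completions endpoint serving this model; the model ID and the `response_format` field are assumptions that vary by provider, so check your provider's documentation.

```python
import json

def build_json_request(user_prompt: str) -> dict:
    """Build a chat-completions payload asking the model for JSON output."""
    return {
        "model": "qwen2.5-7b-instruct",  # assumed model ID; provider-specific
        "messages": [
            {"role": "system",
             "content": "Reply only with a single JSON object."},
            {"role": "user", "content": user_prompt},
        ],
        # Assumed: provider supports OpenAI-style JSON mode
        "response_format": {"type": "json_object"},
        "max_tokens": 512,
    }

payload = build_json_request('Summarize this table as {"rows": [...]}.')
print(json.dumps(payload, indent=2))
```

POST this body to your provider's `/chat/completions` route with your API key; the system message plus JSON mode together steer the model toward parseable output.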
Usage of this model is subject to the Tongyi Qianwen LICENSE AGREEMENT.
32,768 Token Context
Process and analyze large documents and conversations.
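To process a document larger than the 32,768-token window, one common approach is to split it into chunks that each fit, with headroom reserved for the prompt and response. This is a rough sketch: the 4-characters-per-token ratio is a heuristic for English text, not an exact tokenizer count, and the reserve size is an arbitrary illustrative choice.

```python
CONTEXT_TOKENS = 32_768   # model context window from this page
RESERVED_TOKENS = 2_048   # assumed headroom for prompt + response
CHARS_PER_TOKEN = 4       # heuristic estimate, not a tokenizer count

def chunk_document(text: str) -> list[str]:
    """Split text into pieces that should fit the context window."""
    max_chars = (CONTEXT_TOKENS - RESERVED_TOKENS) * CHARS_PER_TOKEN
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

chunks = chunk_document("x" * 300_000)
print(len(chunks), max(len(c) for c in chunks))  # 3 chunks, each <= 122880 chars
```

For exact budgeting, count tokens with the model's own tokenizer instead of the character heuristic.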
Advanced Coding
Improved capabilities in front-end development and full-stack updates.
Agentic Workflows
Autonomously navigate multi-step processes with improved reliability.
Available On
Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency
---|---|---|---|---|---|---|---
NextBit | nextBit | 33K | - | $0.04/M | $0.10/M | 40.2 t/s | 1519 ms |
DeepInfra | deepInfra | 33K | 16K | $0.05/M | $0.10/M | 67.8 t/s | 271 ms |
nCompass | nCompass | 33K | 33K | $0.20/M | $0.20/M | 149.8 t/s | 216 ms |
Together | together | 33K | 2K | $0.30/M | $0.30/M | 177.5 t/s | 263 ms |
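The $/M figures above are dollars per million tokens, so a request's cost is input tokens times the input rate plus output tokens times the output rate. A small sketch using the DeepInfra row ($0.05/M in, $0.10/M out) as an example:

```python
INPUT_RATE = 0.05 / 1_000_000   # $ per input token (DeepInfra row)
OUTPUT_RATE = 0.10 / 1_000_000  # $ per output token (DeepInfra row)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at per-million-token pricing."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 20K-token prompt with a 1K-token completion
print(f"${request_cost(20_000, 1_000):.4f}")  # prints "$0.0011"
```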