Nous: Hermes 2 Mixtral 8x7B DPO
Mistral
Input: text
Output: text
Released: Jan 16, 2024 • Updated: Mar 28, 2025
Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the Mixtral 8x7B MoE LLM.
The model was trained on over 1,000,000 entries of primarily GPT-4-generated data, as well as other high-quality data from open datasets across the AI landscape, achieving state-of-the-art performance on a variety of tasks.
#moe
32,768 Token Context
Process and analyze large documents and conversations (a rough window-budgeting sketch follows this list).
Advanced Coding
Improved capabilities in front-end and full-stack development.
Agentic Workflows
Autonomously navigate multi-step processes with improved reliability.
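
Before sending a large document, it helps to check that the prompt plus the requested completion fit within the 32,768-token window. A minimal sketch follows; it uses a crude ~4-characters-per-token heuristic rather than the model's actual SentencePiece tokenizer, so the estimate is approximate and the safety margin is deliberately generous.

```python
# Rough context-window budget check for the 32,768-token window.
# Assumption: ~4 characters per token is a crude heuristic; the
# model's real tokenizer will differ, so leave headroom.

CONTEXT_WINDOW = 32_768

def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, max_output_tokens: int = 2_048,
                    safety_margin: float = 0.10) -> bool:
    """Check that prompt + requested output fit the window with headroom."""
    budget = int(CONTEXT_WINDOW * (1 - safety_margin))
    return estimate_tokens(prompt) + max_output_tokens <= budget

if __name__ == "__main__":
    doc = "x" * 200_000  # stand-in for a very large document (~50K tokens)
    print(fits_in_context(doc))  # False: too large, needs chunking
```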
Available On
| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|---|---|---|---|---|---|---|---|
| Together | together | 33K | 2K | $0.60/M | $0.60/M | 110.9 t/s | 417 ms |
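
Since Together exposes an OpenAI-compatible API, a request can be made with the standard OpenAI SDK pointed at Together's endpoint. The sketch below is illustrative: the base URL and the full model identifier (`NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO`) are assumptions that should be confirmed against the provider's documentation.

```python
# Minimal sketch of calling this model through Together's
# OpenAI-compatible chat completions endpoint.
# Assumptions: base_url and the full model ID are illustrative;
# confirm both against the provider's documentation.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",   # assumed endpoint
    api_key=os.environ["TOGETHER_API_KEY"],
)

response = client.chat.completions.create(
    model="NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",  # assumed ID
    messages=[
        {"role": "user",
         "content": "Summarize mixture-of-experts in two sentences."},
    ],
    max_tokens=256,  # provider caps output around 2K tokens per the table
)
print(response.choices[0].message.content)
```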
Standard Pricing
Input Tokens
$0.0000006 per token ($0.60 per 1M tokens)
Output Tokens
$0.0000006 per token ($0.60 per 1M tokens)
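
Because input and output are billed at the same flat rate of $0.60 per million tokens, per-request cost is straightforward arithmetic. A small sketch:

```python
# Cost estimate at the listed rate: $0.60 per 1M tokens for both
# input and output ($0.0000006 per token).
RATE_PER_TOKEN = 0.60 / 1_000_000

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the flat per-token rate."""
    return (input_tokens + output_tokens) * RATE_PER_TOKEN

# Example: a 30,000-token prompt plus a 1,000-token completion
# costs (30_000 + 1_000) * 0.0000006 = $0.0186.
print(f"${request_cost(30_000, 1_000):.4f}")  # $0.0186
```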