
NVIDIA: Llama 3.1 Nemotron Ultra 253B v1 (free)

Llama3
Input: text
Output: text
Released: Apr 8, 2025
Updated: May 20, 2025

Llama-3.1-Nemotron-Ultra-253B-v1 is a large language model (LLM) optimized for advanced reasoning, human-interactive chat, retrieval-augmented generation (RAG), and tool-calling tasks. Derived from Meta’s Llama-3.1-405B-Instruct, it has been significantly customized using Neural Architecture Search (NAS), resulting in enhanced efficiency, reduced memory usage, and improved inference latency. The model supports a context length of up to 128K tokens and can operate efficiently on an 8x NVIDIA H100 node.

Note: you must include "detailed thinking on" in the system prompt to enable reasoning. Please see Usage Recommendations for more.
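As a minimal sketch of that system-prompt toggle, a request through an OpenAI-compatible client might look like the following; the endpoint URL and the model slug are assumptions and should be checked against your provider's listing:

```python
# Minimal sketch: enabling reasoning via the system prompt.
# The base_url and model slug below are assumptions; adjust to your provider.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-ultra-253b-v1:free",  # assumed slug
    messages=[
        # "detailed thinking on" switches reasoning on;
        # "detailed thinking off" gives a plain chat response.
        {"role": "system", "content": "detailed thinking on"},
        {"role": "user", "content": "What is the 10th Fibonacci number?"},
    ],
)
print(response.choices[0].message.content)
```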

131,072 Token Context

Process and analyze large documents and conversations.

Advanced Coding

Improved capabilities in front-end development and full-stack updates.

Agentic Workflows

Autonomously navigate multi-step processes with improved reliability.
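Since the model card lists tool calling among the supported tasks, a short sketch of passing a tool definition through the same OpenAI-compatible client is shown below; the weather tool, endpoint, and model slug are illustrative assumptions, not part of the listing:

```python
# Tool-calling sketch against an assumed OpenAI-compatible endpoint.
# The get_weather tool and the model slug are hypothetical examples.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_API_KEY")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Return current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-ultra-253b-v1:free",  # assumed slug
    messages=[
        {"role": "system", "content": "detailed thinking off"},
        {"role": "user", "content": "What's the weather in Berlin right now?"},
    ],
    tools=tools,
)

# If the model chose to call the tool, the call is returned instead of text.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)
```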

Available On

| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|---|---|---|---|---|---|---|---|
| Nebius | nebiusAiStudio | 131K | - | $0.60/M | $1.80/M | 41.8 t/s | 582 ms |
Standard Pricing
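
As a rough illustration of the listed Nebius rates (the token counts below are made up for the example, not taken from the listing):

```python
# Back-of-the-envelope cost at the listed rates: $0.60/M input, $1.80/M output.
# Token counts are illustrative assumptions.
input_tokens = 10_000
output_tokens = 2_000

input_cost = input_tokens / 1_000_000 * 0.60    # $0.60 per million input tokens
output_cost = output_tokens / 1_000_000 * 1.80  # $1.80 per million output tokens

print(f"${input_cost + output_cost:.4f}")  # -> $0.0096
```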
