
Meta: LlamaGuard 2 8B

Llama 3
Input: text
Output: text
Released: May 13, 2024
Updated: Mar 28, 2025

This safeguard model has 8B parameters and is based on the Llama 3 family. Like its predecessor, LlamaGuard 1, it can perform both prompt and response classification.

LlamaGuard 2 behaves like a standard LLM, generating text that indicates whether the given prompt or response is safe or unsafe. If the content is deemed unsafe, it also lists the violated content categories.

For best results, please use raw prompt input or the /completions endpoint, instead of the chat API.
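
As a concrete illustration of the classification flow described above, here is a minimal Python sketch that sends a Llama Guard 2-style prompt to a raw /completions endpoint and reads back the verdict. The endpoint URL, model identifier, abbreviated category list, and response shape are assumptions for illustration; consult your provider's documentation and Meta's official model card for the exact prompt template and the full hazard taxonomy.

```python
# Illustrative sketch: classifying a user prompt with LlamaGuard 2 via a raw
# /completions endpoint (as recommended above, rather than the chat API).
# The endpoint URL, model ID, and abbreviated prompt template below are
# assumptions -- check your provider's docs and Meta's model card.
import os
import requests

API_URL = "https://api.together.xyz/v1/completions"  # assumed provider endpoint
MODEL_ID = "meta-llama/LlamaGuard-2-8b"               # assumed model identifier

# Abbreviated Llama Guard 2-style prompt; the official template enumerates the
# full set of hazard categories.
GUARD_PROMPT = """[INST] Task: Check if there is unsafe content in 'User' messages in the conversation below according to our safety policy with these categories.

<BEGIN UNSAFE CONTENT CATEGORIES>
S1: Violent Crimes.
S2: Non-Violent Crimes.
S3: Sex Crimes.
<END UNSAFE CONTENT CATEGORIES>

<BEGIN CONVERSATION>

User: {user_message}

<END CONVERSATION>

Provide your safety assessment for ONLY THE LAST User message:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must list the violated categories. [/INST]"""


def classify(user_message: str) -> str:
    """Send a raw completion request and return the model's verdict text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
        json={
            "model": MODEL_ID,
            "prompt": GUARD_PROMPT.format(user_message=user_message),
            "max_tokens": 20,    # verdict is short: 'safe' or 'unsafe\nS1,...'
            "temperature": 0.0,  # deterministic classification
        },
        timeout=30,
    )
    response.raise_for_status()
    # Assumes an OpenAI-compatible completions response shape.
    return response.json()["choices"][0]["text"].strip()


print(classify("How do I bake sourdough bread?"))  # expected: 'safe'
```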

It has demonstrated strong performance compared to leading closed-source models in human evaluations.

To read more about the model release, see Meta's announcement. Usage of this model is subject to Meta's Acceptable Use Policy.

8,192 Token Context

Process and analyze large documents and conversations.

Available On

Provider: Together
Model ID: together
Context: 8K
Max Output: -
Input Cost: $0.20/M tokens
Output Cost: $0.20/M tokens
Throughput: 75.6 t/s
Latency: 1116 ms

Standard Pricing

Input Tokens: $0.20 per 1M tokens ($0.0002 per 1K tokens)

Output Tokens: $0.20 per 1M tokens ($0.0002 per 1K tokens)
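
As a quick sanity check on the rates above, the sketch below works out the cost of a single classification call at the listed $0.20 per million token rate; the token counts are illustrative assumptions, not measurements.

```python
# Cost sanity check for the $0.20 per 1M token rate listed above.
RATE_PER_MILLION = 0.20  # USD, applies to both input and output tokens


def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Total cost of one call at the listed rate."""
    return (input_tokens + output_tokens) * RATE_PER_MILLION / 1_000_000


# An assumed typical call: ~1,500 prompt tokens (template + conversation)
# plus a ~10-token verdict costs a small fraction of a cent.
print(f"${cost_usd(1_500, 10):.6f}")  # -> $0.000302
```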
