
Meta: Llama 3.2 11B Vision Instruct

Llama 3
Input: text, image
Output: text
Released: Sep 25, 2024
Updated: Mar 28, 2025

Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed for tasks that combine visual and textual data. It excels at image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pre-trained on a massive dataset of image-text pairs, it handles complex image-analysis tasks that demand high accuracy.

Its integration of visual understanding with language processing makes it well suited to visual-linguistic applications such as content creation, AI-driven customer service, and research.
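As a concrete illustration, a visual question answering request might look like the sketch below. It assumes an OpenAI-compatible chat completions endpoint and the widely used image_url message format; the base URL, model ID, and API key environment variable are placeholders, not confirmed values for any specific provider.

```python
# Minimal VQA sketch, assuming an OpenAI-compatible chat completions API.
# API_BASE, MODEL_ID, and the API_KEY variable are illustrative placeholders;
# substitute whatever your provider documents.
import os
import requests

API_BASE = "https://api.example.com/v1"  # placeholder endpoint
MODEL_ID = "meta-llama/llama-3.2-11b-vision-instruct"  # illustrative ID

response = requests.post(
    f"{API_BASE}/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
    json={
        "model": MODEL_ID,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```

On APIs that follow this format, swapping the image URL for a base64 data URL (e.g. `data:image/jpeg;base64,...`) is the usual way to send a local file instead.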

Refer to Meta's original model card for full details.

Usage of this model is subject to Meta's Acceptable Use Policy.

131,072 Token Context

Process and analyze large documents and conversations (a rough sizing sketch follows this feature list).

Advanced Coding

Improved capabilities in front-end and full-stack development.

Agentic Workflows

Autonomously navigate multi-step processes with improved reliability.

Vision Capabilities

Process and understand images alongside text inputs.
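The sizing sketch referenced above: a rough check of whether a document is likely to fit in the 131,072-token window. The 4-characters-per-token ratio is a generic heuristic, not the Llama tokenizer, and the file name is illustrative; use the model's actual tokenizer for exact counts.

```python
# Hedged sizing check against the 131,072-token context window.
CONTEXT_WINDOW = 131_072
CHARS_PER_TOKEN = 4  # rough heuristic for English text, not the real tokenizer

def fits_in_context(text: str, reserved_output_tokens: int = 2_048) -> bool:
    """Return True if the text plausibly fits, leaving room for the reply."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens + reserved_output_tokens <= CONTEXT_WINDOW

with open("report.txt", encoding="utf-8") as f:  # illustrative file name
    document = f.read()

print("fits:", fits_in_context(document))
```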

Available On

Provider        Model ID       Context   Max Output   Input Cost   Output Cost   Throughput   Latency
DeepInfra       deepInfra      131K      16K          $0.05/M      $0.05/M       11.4 t/s     3171 ms
Cloudflare      cloudflare     131K      -            $0.05/M      $0.68/M       33.7 t/s     266 ms
Lambda          lambda         131K      131K         $0.05/M      $0.05/M       110.1 t/s    432 ms
inference.net   inferenceNet   16K       16K          $0.06/M      $0.06/M       31.1 t/s     2120 ms
NovitaAI        novitaAi       33K       -            $0.06/M      $0.06/M       50.2 t/s     791 ms
Together        together       131K      -            $0.18/M      $0.18/M       140.3 t/s    387 ms
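To make the cost/performance tradeoffs in the table easier to scan, here is a small sketch that ranks providers by input cost, breaking ties by throughput. The figures are copied from the snapshot above and will drift over time.

```python
# Rank providers from the table above: cheapest input cost first; among
# equals, prefer higher throughput. Snapshot figures, subject to change.
providers = [
    # (name, input $/M, output $/M, throughput t/s, latency ms)
    ("DeepInfra",     0.05, 0.05,  11.4, 3171),
    ("Cloudflare",    0.05, 0.68,  33.7,  266),
    ("Lambda",        0.05, 0.05, 110.1,  432),
    ("inference.net", 0.06, 0.06,  31.1, 2120),
    ("NovitaAI",      0.06, 0.06,  50.2,  791),
    ("Together",      0.18, 0.18, 140.3,  387),
]

for name, inp, out, tps, lat in sorted(providers, key=lambda p: (p[1], -p[3])):
    print(f"{name:<14} ${inp:.2f}/M in, ${out:.2f}/M out, {tps:>6.1f} t/s, {lat} ms")
```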
Standard Pricing

Input Tokens: $0.000000049 per token (≈ $0.049/M)

Output Tokens: $0.000000049 per token (≈ $0.049/M)

Image Processing: $0.00007948 per image
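At these rates, a request's cost is the input and output token counts at $0.000000049 each, plus $0.00007948 per attached image. A hedged back-of-the-envelope estimator, with illustrative token counts:

```python
# Back-of-the-envelope cost estimate at the standard rates listed above.
# In practice, billed token counts come from the provider's API response.
INPUT_PER_TOKEN = 0.000000049   # USD, ≈ $0.049 per million tokens
OUTPUT_PER_TOKEN = 0.000000049  # USD
PER_IMAGE = 0.00007948          # USD

def estimate_cost(input_tokens: int, output_tokens: int, images: int = 0) -> float:
    """Estimate the USD cost of a single request at the standard rates."""
    return (input_tokens * INPUT_PER_TOKEN
            + output_tokens * OUTPUT_PER_TOKEN
            + images * PER_IMAGE)

# Example: a 10,000-token prompt with one image and a 500-token reply.
print(f"${estimate_cost(10_000, 500, images=1):.6f}")  # -> $0.000594
```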
