Intelligence in AI models

Llama 3.2 3b Instruct

meta/llama-3-2-3b-instruct

By Meta · family: llama · released 2024-09-18 · knowledge cutoff: 2023-12

$0.010
Input / 1M tokens
$0.014
Output / 1M tokens
131K
Context window
8K
Max output

Prices in USD per 1M tokens. Unknown means the provider does not publish per-token pricing.

Capabilities

Tool calling · Reasoning · Structured output · Attachments · Open weights? · Temperature control
Modalities: input text, pdf · output text

Model fit scores

0–100 · higher is better

These scores reward declared capabilities, context size, price and provider availability; they are not benchmark results. Use them as a directional signal alongside your own evaluation. A sketch of the additive rubric follows the breakdown below.

Coding: 10
  • Tool calling: 0/40
  • Structured output: 0/20
  • Reasoning: 0/10
  • Context window (100K → 1M): 2/20
  • Provider availability: 8/10
Agents: 13
  • Tool calling: 0/35
  • Structured output: 0/25
  • Reasoning: 0/15
  • Output token limit: 5/15
  • Provider availability: 8/10
JSON / structured output: 20
  • Structured output / JSON mode: 0/50
  • Tool calling: 0/20
  • Temperature control: 0/10
  • Price-friendly for high-volume: 20/20
Cost efficiency: 100
  • Headline price (log-scaled): 95/95
  • Has prompt-cache pricing: 5/5
Long context: 46
  • Context window (100K → 2M): 36/90
  • Has published price for full window: 10/10
Production-readiness: 94
  • Number of independent providers: 40/40
  • Has published per-token price: 20/20
  • Context window ≥ 8K: 15/15
  • No data inconsistencies across providers: 4/10
  • Official model (not derivative): 15/15
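Each total above is just the sum of its components, with the context-window points apparently interpolated on a log scale (a log fit reproduces the 2/20 context score, and the headline-price component is explicitly labelled log-scaled). A minimal sketch of the "Coding" rubric in Python; the interpolation formula and the provider-availability cap are assumptions, not the site's published algorithm:

```python
import math

def log_interp(value: float, lo: float, hi: float, max_pts: float) -> float:
    """Map value in [lo, hi] onto [0, max_pts] on a log scale (assumed formula)."""
    if value <= lo:
        return 0.0
    if value >= hi:
        return float(max_pts)
    return max_pts * math.log(value / lo) / math.log(hi / lo)

# Declared data for Llama 3.2 3b Instruct, taken from this page.
model = {
    "tool_calling": False,
    "structured_output": False,
    "reasoning": False,
    "context_window": 131_072,
    "providers": 8,
}

# "Coding" rubric as listed above, assumed to be a plain weighted sum.
coding = (
    (40 if model["tool_calling"] else 0)
    + (20 if model["structured_output"] else 0)
    + (10 if model["reasoning"] else 0)
    + log_interp(model["context_window"], 100_000, 1_000_000, 20)  # ~2.35 -> "2/20"
    + min(model["providers"], 10)  # assumed cap: one point per provider, max 10
)
print(round(coding))  # 10, matching the Coding score above
```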

Cost Efficiency Index


Estimated cost using the recommended provider's headline rate. Each scenario fixes average input/output tokens; the assumptions are shown in the third column, and a worked check of the arithmetic follows the table.

Scenario · Cost · Assumption
• RAG answer · per 1,000 RAG answers · $0.06 (< $0.01 per request) · 5K input tokens (query + 4 retrieved chunks of ~1K each) and a 500-token answer. Typical SaaS knowledge-base bot.
• Support ticket triage · per 10,000 tickets · $0.11 (< $0.01 per request) · 1K input tokens (ticket body + system prompt) and a 100-token JSON classification reply. High-volume customer support.
• Data extraction · per 1,000 documents · $0.03 (< $0.01 per request) · 2K input tokens (a single document page) and a 500-token JSON extraction. ETL / invoice / form pipelines.
• Code review · per 1,000 PRs · $0.09 (< $0.01 per request) · 8K input tokens (diff + surrounding files) and a 1K-token review comment. PR-bot workloads.
• Agent step · per 1,000 steps · $0.13 (< $0.01 per request) · 12K input tokens (long-running tool history) and a 600-token tool-call decision. Cost per agent step.
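The figures follow directly from the headline rate: per-request cost is input_tokens × input_rate + output_tokens × output_rate (rates in USD per token), multiplied by the batch size. A quick check in Python, using the $0.010 / $0.014 Chutes rates quoted above and the token counts from the assumption column:

```python
IN_RATE = 0.010 / 1e6   # USD per input token (Chutes headline rate)
OUT_RATE = 0.014 / 1e6  # USD per output token

scenarios = {
    # name: (input tokens per request, output tokens per request, batch size)
    "RAG answer":            (5_000,    500,  1_000),
    "Support ticket triage": (1_000,    100, 10_000),
    "Data extraction":       (2_000,    500,  1_000),
    "Code review":           (8_000,  1_000,  1_000),
    "Agent step":            (12_000,   600,  1_000),
}

for name, (tok_in, tok_out, batch) in scenarios.items():
    per_request = tok_in * IN_RATE + tok_out * OUT_RATE
    print(f"{name}: ${per_request * batch:.2f} per {batch:,} requests")
# Prints $0.06, $0.11, $0.03, $0.09 and $0.13 -- the figures in the table above.
```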

Pricing detail

Recommended pricing from Chutes · unsloth/Llama-3.2-3B-Instruct

$0.010
Input
$0.014
Output
$0.005
Cache read

Cheapest provider with published pricing: chutes · $0.010 input + $0.014 output (nvidia publishes no per-token price, so it cannot be ranked)

Available from 8 providers

Provider (ID) · Provider model ID · Input / 1M · Output / 1M · Context · Released
• NanoGPT (nano-gpt) · meta-llama/llama-3.2-3b-instruct · $0.031 · $0.049 · 131K · 2024-09-25
• NovitaAI (novita-ai) · meta-llama/llama-3.2-3b-instruct · $0.030 · $0.050 · 33K · 2024-09-18
• Chutes (chutes) · unsloth/Llama-3.2-3B-Instruct · $0.010 · $0.014 · 16K · 2025-02-12
• Kilo Gateway (kilo) · meta-llama/llama-3.2-3b-instruct · $0.051 · $0.340 · 80K · 2024-09-18
• Cloudflare AI Gateway (cloudflare-ai-gateway) · workers-ai/@cf/meta/llama-3.2-3b-instruct · $0.051 · $0.340 · 128K · 2025-04-03
• Nvidia (nvidia) · meta/llama-3.2-3b-instruct · Unknown · Unknown · 33K · 2024-09-18
• Inference (inference) · meta/llama-3.2-3b-instruct · $0.020 · $0.020 · 16K · 2025-01-01
• LLM Gateway (llmgateway) · llama-3.2-3b-instruct · $0.030 · $0.050 · 33K · 2024-09-18

Data inconsistencies across providers

  • context_window varies: 128000, 131072, 16000, 16384, 32768, 80000
  • release_date varies (span 197d): 2024-09-18, 2024-09-25, 2025-01-01, 2025-02-12, 2025-04-03
  • modalities varies across offerings

Providers report different values for this model. The quick stats above use a representative provider; see the table for per-provider details.
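Flags like these fall out of a simple cross-provider diff: gather each field across all offerings and report any field with more than one distinct value. A minimal sketch, assuming per-provider records shaped like the table above (only three of the eight rows shown, with the table's rounded context sizes expanded to exact token counts as an assumption):

```python
from collections import defaultdict

# Assumed record shape; values taken from the provider table above.
offerings = [
    {"provider": "nano-gpt", "context_window": 131_072, "release_date": "2024-09-25"},
    {"provider": "chutes",   "context_window": 16_384,  "release_date": "2025-02-12"},
    {"provider": "nvidia",   "context_window": 32_768,  "release_date": "2024-09-18"},
]

def find_inconsistencies(offerings: list[dict]) -> dict:
    """Return every field (except the provider key) with more than one distinct value."""
    seen = defaultdict(set)
    for row in offerings:
        for field, value in row.items():
            if field != "provider":
                seen[field].add(value)
    return {field: sorted(vals) for field, vals in seen.items() if len(vals) > 1}

print(find_inconsistencies(offerings))
# {'context_window': [16384, 32768, 131072], 'release_date': ['2024-09-18', ...]}
```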

Frequently asked questions

How much does Llama 3.2 3b Instruct cost?

Llama 3.2 3b Instruct costs $0.010 per 1M input tokens and $0.014 per 1M output tokens, sourced from chutes. Cache reads, audio tokens and >200K-context tiers (where applicable) are listed in the Pricing detail block above.

What is the context window of Llama 3.2 3b Instruct?

Llama 3.2 3b Instruct has a context window of 131K tokens, with a max output of 8K tokens per reply. The context window is the total combined size of prompt + completion.
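Because the window is shared, a request must satisfy prompt_tokens + requested_output ≤ 131K, with the output itself capped at 8K. A small guard function; the 131,072 / 8,192 limits are the page's 131K / 8K figures expanded to exact token counts, which is an assumption:

```python
CONTEXT_WINDOW = 131_072  # shared prompt + completion budget (the 131K above)
MAX_OUTPUT = 8_192        # per-reply completion cap (the 8K above)

def fits(prompt_tokens: int, requested_output: int) -> bool:
    """True if a request stays inside both limits."""
    return (requested_output <= MAX_OUTPUT
            and prompt_tokens + requested_output <= CONTEXT_WINDOW)

print(fits(120_000, 8_192))  # True: 128,192 <= 131,072
print(fits(125_000, 8_192))  # False: the reply would not fit in the shared window
```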

Does Llama 3.2 3b Instruct support tool calling?

No. Llama 3.2 3b Instruct does not support tool calling (function calling). If your workflow requires it, look at the /capabilities/tool-calling list for alternatives.

Does Llama 3.2 3b Instruct support structured output / JSON mode?

No. Llama 3.2 3b Instruct does not support structured output / JSON-schema-constrained decoding. If your workflow requires it, look at the /capabilities/structured-output list for alternatives.

Can Llama 3.2 3b Instruct accept image input?

No. Llama 3.2 3b Instruct accepts only text and pdf input. If you need image input, see our /capabilities/vision list for current vision-capable models.

Is Llama 3.2 3b Instruct open-weight?

Yes. Llama 3.2 3b Instruct is an open-weight model: Meta publishes the weights under the Llama 3.2 Community License, which is how third-party hosts (including the unsloth checkpoint served by Chutes above) can run it. The pricing above reflects the cheapest API access.

What are the best alternatives to Llama 3.2 3b Instruct?

If Llama 3.2 3b Instruct doesn't fit, consider Meta-Llama-3.1-8B-Instruct, Llama-3.3-70B-Instruct, Llama 4 Maverick 17B 128E Instruct FP8. Each one targets the same use case — see the Related links below for direct head-to-head pages.

Where does this data come from?

All numbers come from the public models.dev API and are normalized into a single canonical model record. We re-pull daily and write any changes (price, context, capability) to the /changelog page.
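If you want the raw record, the models.dev catalog is fetchable as a single JSON document. The endpoint path and record layout in this sketch are assumptions about that API, so verify them against the models.dev docs before relying on them:

```python
import json
from urllib.request import urlopen

# Assumed endpoint: models.dev serves its whole catalog as one JSON document.
URL = "https://models.dev/api.json"

with urlopen(URL) as resp:
    catalog = json.load(resp)

# Assumed layout: providers keyed at the top level, models nested under each.
for provider_id, provider in catalog.items():
    for model_id, model in provider.get("models", {}).items():
        if "llama-3.2-3b" in model_id:
            print(provider_id, model_id, model.get("cost"))
```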

Last updated:

Data is sourced from models.dev and normalized for comparison. Prices and capabilities may change. Always verify critical production decisions with the provider's official documentation.