AI Models Interface

Llama 3.2 1B Instruct

meta/llama-3-2-1b-instruct

By Meta · family: llama · released 2024-09-18 · knowledge cutoff: 2023-12

$0.010
Input / 1M tokens
$0.010
Output / 1M tokens
16K
Context window
8K
Max output

Prices in USD per 1M tokens. Unknown means the provider does not publish per-token pricing.

Capabilities

Tool calling · Reasoning? · Structured output · Attachments · Open weights · Temperature control
Modalities: input text · output text

Model fit scores

0–100 · higher is better

These scores reward declared capabilities, context size, price and provider availability — they are not benchmark results. Use them as a directional signal alongside your own evaluation.

Coding: 5
  • Tool calling: 0/40
  • Structured output: 0/20
  • Reasoning: 0/10
  • Context window (100K → 1M): 0/20
  • Provider availability: 5/10
Agents: 10
  • Tool calling: 0/35
  • Structured output: 0/25
  • Reasoning: 0/15
  • Output token limit: 5/15
  • Provider availability: 5/10
JSON / structured output: 30
  • Structured output / JSON mode: 0/50
  • Tool calling: 0/20
  • Temperature control: 10/10
  • Price-friendly for high-volume: 20/20
Cost efficiency: 95
  • Headline price (log-scaled): 95/95
  • Has prompt-cache pricing: 0/5
Long context: 0
  • Context ≥ 100K: 0/100
Production-readiness: 74
  • Number of independent providers: 25/40
  • Has published per-token price: 20/20
  • Context window ≥ 8K: 8/15
  • No data inconsistencies across providers: 6/10
  • Official model (not derivative): 15/15
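
For illustration, here is a minimal sketch of how a fit score of this kind could be composed from declared model data. The weights mirror the Agents breakdown above, but the tiering rules, field names and function are assumptions for illustration, not the site's actual formula.

```python
# Hypothetical sketch of how the "Agents" fit score above could be composed.
# Weights mirror the breakdown; the tiering rules and field names are assumed.

def agents_fit_score(model: dict) -> int:
    score = 0
    score += 35 if model["tool_calling"] else 0        # Tool calling      (0/35 here)
    score += 25 if model["structured_output"] else 0   # Structured output (0/25 here)
    score += 15 if model["reasoning"] else 0           # Reasoning         (0/15 here)

    # Output token limit, tiered (assumed): 8K -> 5, 32K -> 10, 64K+ -> 15.
    max_out = model["max_output_tokens"]
    score += 15 if max_out >= 64_000 else 10 if max_out >= 32_000 else 5 if max_out >= 8_000 else 0

    # Provider availability (assumed): one point per independent provider, capped at 10.
    score += min(10, model["provider_count"])
    return score

llama_3_2_1b = {
    "tool_calling": False,
    "structured_output": False,
    "reasoning": False,
    "max_output_tokens": 8_192,
    "provider_count": 5,
}
print(agents_fit_score(llama_3_2_1b))  # -> 10, consistent with the Agents score above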

Cost Efficiency Index

Open full calculator →

Estimated cost using the recommended provider's headline rate. Each scenario fixes average input/output tokens — the assumptions are shown in the third column.

Scenario · Cost · Assumption
RAG answer
per 1,000 RAG answers
$0.06
< $0.01 per request
5K input tokens (query + 4 retrieved chunks of ~1K each) and a 500-token answer. Typical SaaS knowledge-base bot.
Support ticket triage
per 10,000 tickets
$0.11
< $0.01 per request
1K input tokens (ticket body + system prompt) and a 100-token JSON classification reply. High-volume customer support.
Data extraction
per 1,000 documents
$0.03
< $0.01 per request
2K input tokens (a single document page) and a 500-token JSON extraction. ETL / invoice / form pipelines.
Code review
per 1,000 PRs
$0.09
< $0.01 per request
8K input tokens (diff + surrounding files) and a 1K-token review comment. PR-bot workloads.
Agent step
per 1,000 steps
$0.13
< $0.01 per request
12K input tokens (long-running tool history) and a 600-token tool-call decision. Cost per agent step.
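
As a sanity check, the per-scenario figures above follow directly from the headline rates. Here is a minimal sketch of that arithmetic; the scenario assumptions are copied from the table, while the constants and function name are ours.

```python
# Sketch of the scenario-cost arithmetic: headline rates for this model are
# $0.010 per 1M input tokens and $0.010 per 1M output tokens.

INPUT_PER_1M = 0.010
OUTPUT_PER_1M = 0.010

def scenario_cost(input_tokens: int, output_tokens: int, requests: int) -> float:
    """Total USD cost for `requests` calls with fixed per-call token counts."""
    per_request = (input_tokens * INPUT_PER_1M + output_tokens * OUTPUT_PER_1M) / 1_000_000
    return per_request * requests

# Assumptions copied from the table above.
print(scenario_cost(5_000, 500, 1_000))    # RAG answer:            ~$0.055 (shown as $0.06)
print(scenario_cost(1_000, 100, 10_000))   # Support ticket triage: ~$0.11
print(scenario_cost(2_000, 500, 1_000))    # Data extraction:       ~$0.025 (shown as $0.03)
print(scenario_cost(8_000, 1_000, 1_000))  # Code review:           ~$0.09
print(scenario_cost(12_000, 600, 1_000))   # Agent step:            ~$0.126 (shown as $0.13)
```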

Pricing detail

Recommended price from inference · meta/llama-3.2-1b-instruct

$0.010
Input
$0.010
Output

Cheapest provider: nvidia · Unknown input + Unknown output

Available from 5 providers

Provider · Provider model ID · Input / 1M · Output / 1M · Context · Released
Chutes (chutes) · unsloth/Llama-3.2-1B-Instruct · $0.010 · $0.011 · 16K · 2026-01-27
Kilo Gateway (kilo) · meta-llama/llama-3.2-1b-instruct · $0.027 · $0.200 · 60K · 2024-09-18
Cloudflare AI Gateway (cloudflare-ai-gateway) · workers-ai/@cf/meta/llama-3.2-1b-instruct · $0.027 · $0.200 · 128K · 2025-04-03
Nvidia (nvidia) · meta/llama-3.2-1b-instruct · Unknown · Unknown · 128K · 2024-09-18
Inference (inference) · meta/llama-3.2-1b-instruct · $0.010 · $0.010 · 16K · 2025-01-01
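
If you want to re-derive the lowest-priced provider from this table yourself, a rough sketch follows. It uses a blended input+output rate and skips providers with unpublished (Unknown) pricing; that convention, and the field names, are our assumptions rather than the site's method.

```python
# Sketch: pick the lowest-priced provider from the table above using a blended
# input+output rate per 1M tokens. Providers without published pricing are skipped;
# this convention and the field names are assumptions.

providers = [
    {"name": "Chutes",                "input": 0.010, "output": 0.011},
    {"name": "Kilo Gateway",          "input": 0.027, "output": 0.200},
    {"name": "Cloudflare AI Gateway", "input": 0.027, "output": 0.200},
    {"name": "Nvidia",                "input": None,  "output": None},   # Unknown pricing
    {"name": "Inference",             "input": 0.010, "output": 0.010},
]

priced = [p for p in providers if p["input"] is not None and p["output"] is not None]
cheapest = min(priced, key=lambda p: p["input"] + p["output"])
print(cheapest["name"])  # -> Inference, under this blended-rate convention
```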

Data discrepancies across providers

  • context_window varies: 128000, 16000, 16384, 60000
  • release_date varies (span 496d): 2024-09-18, 2025-01-01, 2025-04-03, 2026-01-27

Providers report different values for this model. The summary above uses a representative provider; see the table for details.
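
The discrepancy flags above can be reproduced with a field-by-field comparison across the provider records. Here is a rough sketch of that check; the field names and record shape are assumptions, and the real pipeline may normalize values first.

```python
# Sketch: flag fields whose values differ across provider records for one model.

from datetime import date

records = [
    {"provider": "chutes",                "context_window": 16_384,  "release_date": date(2026, 1, 27)},
    {"provider": "kilo",                  "context_window": 60_000,  "release_date": date(2024, 9, 18)},
    {"provider": "cloudflare-ai-gateway", "context_window": 128_000, "release_date": date(2025, 4, 3)},
    {"provider": "nvidia",                "context_window": 128_000, "release_date": date(2024, 9, 18)},
    {"provider": "inference",             "context_window": 16_000,  "release_date": date(2025, 1, 1)},
]

for field in ("context_window", "release_date"):
    values = sorted({r[field] for r in records})
    if len(values) > 1:
        print(f"{field} varies: {values}")

dates = [r["release_date"] for r in records]
print(f"release_date span: {(max(dates) - min(dates)).days}d")  # -> 496d
```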

Frequently asked questions

How much does Llama 3.2 1B Instruct cost?

Llama 3.2 1B Instruct costs $0.010 per 1M input tokens and $0.010 per 1M output tokens, sourced from inference. Cache reads, audio tokens and >200K-context tiers (where applicable) are listed in the Pricing detail block above.

What is the context window of Llama 3.2 1B Instruct?

Llama 3.2 1B Instruct has a context window of 16K tokens, with a max output of 8K tokens per reply. The context window is the total combined size of prompt + completion.
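
In practice this means a request must fit prompt plus completion into the 16K window, with the completion itself capped at 8K. A tiny sketch of that budget check, using the limits quoted above (the exact 16,384/8,192 values and the function are assumptions):

```python
# Sketch: validate a request against the limits quoted above (16K combined window,
# 8K max output). Token counts come from whatever tokenizer you use; names assumed.

CONTEXT_WINDOW = 16_384
MAX_OUTPUT = 8_192

def fits(prompt_tokens: int, requested_output_tokens: int) -> bool:
    return (requested_output_tokens <= MAX_OUTPUT
            and prompt_tokens + requested_output_tokens <= CONTEXT_WINDOW)

print(fits(10_000, 4_000))  # True:  14K combined, output under 8K
print(fits(10_000, 8_000))  # False: 18K combined exceeds the 16K window
```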

Does Llama 3.2 1B Instruct support tool calling?

No. Llama 3.2 1B Instruct does not support tool calling (function calling). If your workflow requires it, look at the /capabilities/tool-calling list for alternatives.

Does Llama 3.2 1B Instruct support structured output / JSON mode?

Support for structured output / JSON-schema-constrained decoding is not reported for Llama 3.2 1B Instruct in our data source. Verify with Meta's official documentation before relying on it in production.

Can Llama 3.2 1B Instruct accept image input?

No. Llama 3.2 1B Instruct only accepts text as input. If you need image input, see our /capabilities/vision list for current vision-capable models.

Is Llama 3.2 1B Instruct open-weight?

Yes. Llama 3.2 1B Instruct's weights are publicly available, so you can self-host or fine-tune. Note that open weights ≠ open source — the training data and code are typically not released.

What are the best alternatives to Llama 3.2 1B Instruct?

If Llama 3.2 1B Instruct doesn't fit, consider Meta-Llama-3.1-8B-Instruct, Llama-3.3-70B-Instruct, or Llama 4 Maverick 17B 128E Instruct FP8. Each targets the same use case; see the Related links below for direct head-to-head pages.

Where does this data come from?

All numbers come from the public models.dev API and are normalized into a single canonical model record. We re-pull daily and write any changes (price, context, capability) to the /changelog page.
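
For reference, a rough sketch of pulling such a record yourself. The endpoint URL and the nested response shape shown here are assumptions about models.dev's public JSON export, so verify them against models.dev before relying on this.

```python
# Sketch: pull the public models.dev dataset and extract one provider's record for
# this model. The URL and the nested shape are assumptions; verify with models.dev.

import json
import urllib.request

URL = "https://models.dev/api.json"  # assumed public JSON export

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# Assumed shape: { provider_id: { "models": { model_id: {...} } } }
record = data.get("inference", {}).get("models", {}).get("meta/llama-3.2-1b-instruct")
print(json.dumps(record, indent=2) if record else "model id not found; shape differs")
```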

More Meta models

Capability lists this model is in

Last updated:

Data is sourced from models.dev and normalized for comparison. Prices and capabilities may change. Always verify critical production decisions with the provider's official documentation.