Llama 4 Maverick 17B 128E Instruct
meta/llama-4-maverick-17b-128e-instruct-maas · by Meta · family: llama · released 2025-04-29 · knowledge cutoff: 2024-08
Prices in USD per 1M tokens. Unknown means the provider does not publish per-token pricing.
Capabilities
Model fit scores
0–100 · higher is better. These scores reward declared capabilities, context size, price and provider availability; they are not benchmark results. Use them as a directional signal alongside your own evaluation (a short sketch after the breakdown shows how each headline number is summed from its components).
Coding: 75
- Tool calling: 40/40
- Structured output: 20/20
- Reasoning: 0/10
- Context window (100K → 1M): 14/20
- Provider availability: 1/10
Agents: 66
- Tool calling: 35/35
- Structured output: 25/25
- Reasoning: 0/15
- Output token limit: 5/15
- Provider availability: 1/10
JSON / structured output: 97
- Structured output / JSON mode: 50/50
- Tool calling: 20/20
- Temperature control: 10/10
- Price-friendly for high-volume: 17/20
Cost efficiency: 58
- Headline price (log-scaled): 58/95
- Has prompt-cache pricing: 0/5
Long context: 76
- Context window (100K → 2M): 66/90
- Has published price for full window: 10/10
Vision: 87
- Accepts image input: 50/50
- Context window (10K → 1M): 26/30
- Has published price: 10/10
- Provider availability: 1/10
Production-readiness: 65
- Number of independent providers: 5/40
- Has published per-token price: 20/20
- Context window ≥ 8K: 15/15
- No data inconsistencies across providers: 10/10
- Official model (not derivative): 15/15
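Each headline fit score is simply the sum of its component points, with the maxima adding up to 100. A minimal sketch in Python (the dictionary and function names are illustrative, not the site's actual scoring code):

```python
# Illustrative only: how a headline fit score is the sum of its components.
CODING_COMPONENTS = {
    "tool_calling": (40, 40),                # (earned, max)
    "structured_output": (20, 20),
    "reasoning": (0, 10),
    "context_window_100k_to_1m": (14, 20),
    "provider_availability": (1, 10),
}

def headline_score(components: dict[str, tuple[int, int]]) -> int:
    """Sum the earned points; the maxima add up to 100 by construction."""
    assert sum(max_pts for _, max_pts in components.values()) == 100
    return sum(earned for earned, _ in components.values())

print(headline_score(CODING_COMPONENTS))  # -> 75, matching the Coding score above
```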
Cost Efficiency Index
Open full calculator →
Estimated cost using the recommended provider's headline rate. Each scenario fixes average input/output tokens; the assumptions are shown in the third column. A worked example of the arithmetic appears after the table.
| Scenario | Cost | Assumption |
|---|---|---|
| RAG answer (per 1,000 answers) | $2.32 (< $0.01 per request) | 5K input tokens (query + 4 retrieved chunks of ~1K each) and a 500-token answer. Typical SaaS knowledge-base bot. |
| Support ticket triage (per 10,000 tickets) | $4.65 (< $0.01 per request) | 1K input tokens (ticket body + system prompt) and a 100-token JSON classification reply. High-volume customer support. |
| Data extraction (per 1,000 documents) | $1.28 (< $0.01 per request) | 2K input tokens (a single document page) and a 500-token JSON extraction. ETL / invoice / form pipelines. |
| Code review (per 1,000 PRs) | $3.95 (< $0.01 per request) | 8K input tokens (diff + surrounding files) and a 1K-token review comment. PR-bot workloads. |
| Agent step (per 1,000 steps) | $4.89 (< $0.01 per request) | 12K input tokens (long-running tool history) and a 600-token tool-call decision. Cost per agent step. |
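The arithmetic behind these rows is just tokens × headline rate. A minimal sketch using the google-vertex prices listed below ($0.35 per 1M input, $1.15 per 1M output); the function name is illustrative:

```python
# Illustrative cost arithmetic for the scenarios above.
INPUT_PER_M = 0.35   # USD per 1M input tokens (google-vertex headline rate)
OUTPUT_PER_M = 1.15  # USD per 1M output tokens

def scenario_cost(requests: int, input_tokens: int, output_tokens: int) -> float:
    """Total USD cost for `requests` calls with fixed per-request token counts."""
    total_in = requests * input_tokens
    total_out = requests * output_tokens
    return total_in / 1e6 * INPUT_PER_M + total_out / 1e6 * OUTPUT_PER_M

print(scenario_cost(1_000, 5_000, 500))    # ≈ 2.32, the RAG-answer row
print(scenario_cost(1_000, 12_000, 600))   # ≈ 4.89, the agent-step row
```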
Pricing detail
Recommended price from google-vertex · meta/llama-4-maverick-17b-128e-instruct-maas
Available from 1 provider
| Provider | Provider model ID | Input / 1M | Output / 1M | Context | Released |
|---|---|---|---|---|---|
| Vertex (google-vertex) | meta/llama-4-maverick-17b-128e-instruct-maas | $0.350 | $1.15 | 524K | 2025-04-29 |
Frequently asked questions
How much does Llama 4 Maverick 17B 128E Instruct cost?
Llama 4 Maverick 17B 128E Instruct costs $0.350 per 1M input tokens and $1.15 per 1M output tokens, sourced from google-vertex. Cache reads, audio tokens and >200K-context tiers (where applicable) are listed in the Pricing detail block above.
What is the context window of Llama 4 Maverick 17B 128E Instruct?
Llama 4 Maverick 17B 128E Instruct has a context window of 524K tokens, with a max output of 8K tokens per reply. This is the total combined size of prompt + completion.
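As a rough illustration of that shared budget, the prompt can use whatever the planned reply leaves of the window. A minimal sketch using the published figures (524K and 8K, rounded as on this page):

```python
# Illustrative prompt-budget check for a shared prompt + completion window.
CONTEXT_WINDOW = 524_000  # total window (rounded published figure)
MAX_OUTPUT = 8_000        # max completion tokens per reply

def max_prompt_tokens(planned_output: int = MAX_OUTPUT) -> int:
    """Largest prompt that still leaves room for `planned_output` tokens."""
    planned_output = min(planned_output, MAX_OUTPUT)
    return CONTEXT_WINDOW - planned_output

print(max_prompt_tokens())  # 516000 tokens left for the prompt at full output
```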
Does Llama 4 Maverick 17B 128E Instruct support tool calling?
Yes. Llama 4 Maverick 17B 128E Instruct supports tool calling (function calling). This makes it suitable for production agent and automation workloads where the model has to invoke external functions reliably.
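For illustration only, a minimal tool-calling sketch assuming an OpenAI-compatible chat endpoint; the base URL, API key and the get_ticket_status function are placeholders, so check the provider's (e.g. Vertex AI's) documentation for the real interface and auth:

```python
from openai import OpenAI

# Placeholder endpoint and key: substitute the provider's actual values.
client = OpenAI(base_url="https://YOUR_PROVIDER/v1", api_key="YOUR_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_ticket_status",  # illustrative function, not a real API
        "description": "Look up a support ticket by id.",
        "parameters": {
            "type": "object",
            "properties": {"ticket_id": {"type": "string"}},
            "required": ["ticket_id"],
        },
    },
}]

resp = client.chat.completions.create(
    model="meta/llama-4-maverick-17b-128e-instruct-maas",
    messages=[{"role": "user", "content": "What's the status of ticket 4812?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)  # the model's requested function call(s)
```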
Does Llama 4 Maverick 17B 128E Instruct support structured output / JSON mode?
Yes. Llama 4 Maverick 17B 128E Instruct supports structured output / JSON-schema-constrained decoding. This makes it suitable for pipelines that need every response to parse against a fixed schema, such as extraction, classification and ETL workloads.
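A minimal structured-output sketch under the same OpenAI-compatible assumption; whether the provider exposes JSON-schema mode via this exact response_format parameter is an assumption to verify against its docs:

```python
from openai import OpenAI

client = OpenAI(base_url="https://YOUR_PROVIDER/v1", api_key="YOUR_KEY")  # placeholders

# Illustrative schema for a ticket-triage reply.
schema = {
    "name": "ticket_triage",
    "schema": {
        "type": "object",
        "properties": {
            "category": {"type": "string", "enum": ["billing", "bug", "question"]},
            "priority": {"type": "integer", "minimum": 1, "maximum": 5},
        },
        "required": ["category", "priority"],
        "additionalProperties": False,
    },
}

resp = client.chat.completions.create(
    model="meta/llama-4-maverick-17b-128e-instruct-maas",
    messages=[{"role": "user", "content": "Triage: 'I was charged twice this month.'"}],
    response_format={"type": "json_schema", "json_schema": schema},
)
print(resp.choices[0].message.content)  # JSON string conforming to the schema
```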
Can Llama 4 Maverick 17B 128E Instruct accept image input?
Yes. Llama 4 Maverick 17B 128E Instruct accepts both text and image input. Vision pricing per image is usually billed on top of the regular token rate; check the provider's pricing docs for the exact rule.
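A minimal vision-input sketch, again assuming an OpenAI-compatible endpoint (URL, key and image URL are placeholders): the user message carries a text part and an image part.

```python
from openai import OpenAI

client = OpenAI(base_url="https://YOUR_PROVIDER/v1", api_key="YOUR_KEY")  # placeholders

resp = client.chat.completions.create(
    model="meta/llama-4-maverick-17b-128e-instruct-maas",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown on this invoice?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/invoice.png"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```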
Is Llama 4 Maverick 17B 128E Instruct open-weight?
Yes. Llama 4 Maverick 17B 128E Instruct's weights are publicly available, so you can self-host or fine-tune. Note that open weights ≠ open source — the training data and code are typically not released.
What are the best alternatives to Llama 4 Maverick 17B 128E Instruct?
If Llama 4 Maverick 17B 128E Instruct doesn't fit, consider Meta-Llama-3.1-8B-Instruct, Llama-3.3-70B-Instruct or Llama 4 Maverick 17B 128E Instruct FP8. Each one targets the same use case; see the Related links below for direct head-to-head pages.
Where does this data come from?
All numbers come from the public models.dev API and are normalized into a single canonical model record. We re-pull daily and write any changes (price, context, capability) to the /changelog page.
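A minimal sketch of such a pull; the endpoint URL is an assumption (check models.dev for the current API), and the walker deliberately makes no assumption about the payload's nesting:

```python
import requests

# Assumed endpoint; verify against models.dev before relying on it.
catalog = requests.get("https://models.dev/api.json", timeout=30).json()

def find_records(node, needle, path=""):
    """Yield (path, value) pairs whose key mentions the model id, at any depth."""
    if isinstance(node, dict):
        for key, value in node.items():
            if needle in str(key):
                yield f"{path}/{key}", value
            yield from find_records(value, needle, f"{path}/{key}")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            yield from find_records(value, needle, f"{path}[{i}]")

for path, record in find_records(catalog, "llama-4-maverick-17b-128e-instruct"):
    print(path)  # where this model's record(s) live in the raw payload
```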
Explore more
More Meta models
- Meta-Llama-3.1-8B-Instruct · $0.02 in / $0.03 out
- Llama-3.3-70B-Instruct · $0.05 in / $0.23 out
- Llama 4 Maverick 17B 128E Instruct FP8 · $0.14 in / $0.59 out
- Llama 4 Scout 17B 16E Instruct · $0.08 in / $0.30 out
- Meta-Llama-3.1-70B-Instruct · $0.40 in / $0.40 out
Last updated:
Data is sourced from models.dev and normalized for comparison. Prices and capabilities may change. Always verify critical production decisions with the provider's official documentation.