Vendor · 2026-05-12
DeepSeek
Open-weight Chinese lab — V3 and R1 series gained worldwide adoption for reasoning.
| Model | Input / 1M | Output / 1M | Context | Providers | Tags |
|---|---|---|---|---|---|
| DeepSeek R1 Distill Llama 70B | $0.027 | $0.109 | 8K | 5 | json · reasoning · open-weights |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-14B | $0.100 | $0.100 | 131K | 4 | tools · json · reasoning |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | $0.180 | $0.180 | 131K | 6 | tools · json · reasoning |
| DeepSeek V4 Flash | $0.140 | $0.280 | 1M | 15 | tools · json · reasoning · open-weights |
| DeepSeek Chat | $0.140 | $0.280 | 1M | 5 | tools · open-weights |
| DeepSeek Reasoner | $0.140 | $0.280 | 1M | 4 | tools · reasoning · open-weights |
| DeepSeek V3.2 Exp | $0.220 | $0.330 | 164K | 9 | tools · reasoning |
| DeepSeek-V3.2 | $0.260 | $0.380 | 164K | 31 | tools · reasoning · open-weights |
| DeepSeek-V3.2-Speciale | $0.270 | $0.410 | 128K | 5 | reasoning · open-weights |
| DeepSeek V3.2 Thinking | $0.280 | $0.420 | 128K | 3 | tools · reasoning |
| DeepSeek V3.1 Nex N1 | $0.280 | $0.420 | 128K | 3 | — |
| DeepSeek-V3.1 | $0.200 | $0.700 | 131K | 18 | tools · reasoning · open-weights |
| DeepSeek V3.1 Terminus | $0.250 | $0.700 | 131K | 14 | tools · json · reasoning · open-weights |
| DeepSeek-V3-0324 | $0.250 | $0.700 | 131K | 13 | tools · open-weights |
| DeepSeek-V3.1 | $0.250 | $1.00 | 164K | 13 | tools · json · reasoning · open-weights |
| DeepSeek-R1-0528 | $0.400 | $1.70 | 164K | 19 | tools · reasoning · open-weights |
| DeepSeek-R1 | $0.400 | $1.70 | 128K | 18 | tools · reasoning |
| DeepSeek V4 Pro | $1.74 | $3.48 | 1M | 24 | tools · json · reasoning · open-weights |
Frequently asked questions
How many AI models does DeepSeek offer?
We track 18 canonical DeepSeek models. The list is recomputed daily from models.dev.
Which DeepSeek model is the cheapest?
DeepSeek R1 Distill Llama 70B is currently the lowest-priced DeepSeek model, at $0.027 per 1M input tokens and $0.109 per 1M output tokens. For the full apples-to-apples list, see /pricing/cheapest-llm-api.
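To turn these per-1M list prices into a per-request estimate, multiply each token count by its rate. A minimal sketch using the table's figures for DeepSeek R1 Distill Llama 70B (the token counts are hypothetical):

```python
# Estimate request cost in USD from per-1M-token list prices.
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """USD cost of one request given token counts and per-1M prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# 50K-token prompt, 2K-token reply at $0.027 in / $0.109 out per 1M:
cost = estimate_cost(50_000, 2_000, 0.027, 0.109)
print(f"${cost:.6f}")  # $0.001568
```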
Which DeepSeek model has the largest context window?
Several models top out at 1M tokens: DeepSeek V4 Flash, DeepSeek Chat, DeepSeek Reasoner, and DeepSeek V4 Pro, with V4 Flash the cheapest of the group. The context window is the total of prompt + completion.
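Because the window is shared between prompt and completion, the reply budget is whatever the prompt leaves over. A minimal sketch of that arithmetic, using the table's 1M figure (the prompt size is hypothetical):

```python
# The context window covers prompt + completion combined,
# so completion budget = limit - prompt size (never negative).
CONTEXT_LIMIT = 1_000_000  # 1M tokens, per the table

def max_completion_tokens(prompt_tokens: int, limit: int = CONTEXT_LIMIT) -> int:
    """Tokens left for the model's reply after the prompt is counted."""
    return max(limit - prompt_tokens, 0)

print(max_completion_tokens(900_000))  # 100000
```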
Which DeepSeek models support tool calling?
Multiple DeepSeek models support tool calling, with deepseek-ai/DeepSeek-R1-Distill-Qwen-14B being a popular pick. The tags column in the table above marks every model that supports tool calling.
What are the best alternatives to DeepSeek?
Depends on the use case. For raw cost savings, look at /pricing/cheapest-llm-api. For agent-oriented workloads, /best/best-ai-model-for-agents. For long-document workflows, /best/best-long-context-llm.
How fresh is this DeepSeek pricing data?
Daily. Our pipeline pulls models.dev each morning and rebuilds these pages whenever the data changes, so list-price moves and new model releases land within roughly 24 hours.
Explore more
Top DeepSeek models
- DeepSeek-V3.2: $0.26 in / $0.38 out
- DeepSeek V4 Pro: $1.74 in / $3.48 out
- DeepSeek-R1-0528: $0.40 in / $1.70 out
- DeepSeek-V3.1: $0.20 in / $0.70 out
- DeepSeek-R1: $0.40 in / $1.70 out
Browse by use case
Browse by capability
Last updated:
Prices in USD per 1M tokens. Unknown means the provider does not publish per-token pricing.
Data is sourced from models.dev and normalized for comparison. Prices and capabilities may change. Always verify critical production decisions with the provider's official documentation.