Provider · 2026-05-12
huggingface
| Model | Input / 1M | Output / 1M | Context | Providers | Tags |
|---|---|---|---|---|---|
| MiMo-V2-Flash | $0.100 | $0.300 | 262K | 1 | tools · reasoning · open-weights |
Frequently asked questions
How many AI models does huggingface offer?
We track 1 canonical huggingface model. The list is recomputed daily from models.dev.
Which huggingface model is the cheapest?
MiMo-V2-Flash is currently the lowest-priced huggingface model, at $0.100 per 1M input tokens and $0.300 per 1M output tokens. For the full apples-to-apples list, see /pricing/cheapest-llm-api.
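Per-1M-token pricing makes request costs easy to estimate. A minimal sketch, using the MiMo-V2-Flash list prices from the table above (verify current prices with the provider before relying on them):

```python
# List prices taken from the table above; these may change.
INPUT_PRICE_PER_1M = 0.100   # USD per 1M input tokens (MiMo-V2-Flash)
OUTPUT_PRICE_PER_1M = 0.300  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at list price."""
    return (input_tokens * INPUT_PRICE_PER_1M
            + output_tokens * OUTPUT_PRICE_PER_1M) / 1_000_000

# Example: a 200K-token prompt with a 10K-token reply.
print(f"${request_cost(200_000, 10_000):.4f}")
```

Output tokens cost 3x more than input tokens here, so summarization-style workloads (long in, short out) are disproportionately cheap.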
Which huggingface model has the largest context window?
MiMo-V2-Flash leads at 262K tokens. That figure is the combined total of prompt and completion tokens.
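Because the window covers prompt plus completion, the usable prompt size shrinks by whatever you reserve for the reply. A minimal sketch, assuming "262K" means 262,144 tokens (256 x 1024; check the provider's docs for the exact limit):

```python
# Assumed context limit; confirm the exact value with the provider.
CONTEXT_WINDOW = 262_144

def max_prompt_tokens(max_completion_tokens: int) -> int:
    """Largest prompt that still leaves room for the requested completion."""
    if max_completion_tokens > CONTEXT_WINDOW:
        raise ValueError("completion budget exceeds the context window")
    return CONTEXT_WINDOW - max_completion_tokens

# Reserving 4,096 tokens for the reply leaves 258,048 for the prompt.
print(max_prompt_tokens(4_096))
```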
Which huggingface models support tool calling?
MiMo-V2-Flash, the only canonical huggingface model we currently track, supports tool calling. The tags column in the table above marks every model with tool-calling support.
What are the best alternatives to huggingface?
Depends on the use case. For raw cost savings, look at /pricing/cheapest-llm-api. For agent-oriented workloads, /best/best-ai-model-for-agents. For long-document workflows, /best/best-long-context-llm.
How fresh is this huggingface pricing data?
Daily. Our pipeline pulls models.dev each morning and rebuilds these pages on data change, so list-price moves and new model releases land within roughly 24 hours.
Prices in USD per 1M tokens. Unknown means the provider does not publish per-token pricing.
Data is sourced from models.dev and normalized for comparison. Prices and capabilities may change. Always verify critical production decisions with the provider's official documentation.