AI model intelligence

Provider · 2026-05-12

dinference

1 canonical model · 1 total entry (including derivatives)

Model | Input / 1M | Output / 1M | Context | Providers | Tags
GPT OSS 120B | $0.068 | $0.270 | 131K | 1 | tools · open-weights

Frequently asked questions

How many AI models does dinference offer?

We track 1 canonical dinference model. The list is recomputed daily from models.dev.

Which dinference model is the cheapest?

As the only dinference model we track, GPT OSS 120B is also the cheapest, at $0.068 per 1M input tokens and $0.270 per 1M output tokens. For the full apples-to-apples list, see /pricing/cheapest-llm-api.
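The per-1M list prices above translate to per-request cost with simple arithmetic. A minimal sketch, using the GPT OSS 120B rates from the table (the function name and defaults are illustrative, not part of any provider SDK):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float = 0.068, output_per_m: float = 0.270) -> float:
    """Return the USD cost of one request at per-1M-token list prices."""
    return (input_tokens / 1_000_000 * input_per_m
            + output_tokens / 1_000_000 * output_per_m)

# Example: a 10K-token prompt with a 2K-token completion.
cost = request_cost(10_000, 2_000)
print(f"${cost:.6f}")  # roughly $0.001220
```

For production billing, verify the provider's own price sheet; list prices here may lag.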

Which dinference model has the largest context window?

GPT OSS 120B leads at 131K tokens. This limit covers the prompt and completion combined.
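Because the window is shared between prompt and completion, you must budget output room before sending. A minimal sketch; it assumes "131K" means 131,072 tokens, which the provider's docs should confirm:

```python
# Assumed context limit; "131K" is treated as 131_072 tokens here.
CONTEXT_LIMIT = 131_072

def max_output_budget(prompt_tokens: int, context_limit: int = CONTEXT_LIMIT) -> int:
    """Tokens left for the completion after the prompt is counted."""
    return max(0, context_limit - prompt_tokens)

print(max_output_budget(120_000))  # prints 11072
```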

Which dinference models support tool calling?

GPT OSS 120B, the sole dinference model we track, supports tool calling. The tags column in the table above marks every model with tool-calling support.

What are the best alternatives to dinference?

Depends on the use case. For raw cost savings, look at /pricing/cheapest-llm-api. For agent-oriented workloads, /best/best-ai-model-for-agents. For long-document workflows, /best/best-long-context-llm.

How fresh is this dinference pricing data?

Daily. Our pipeline pulls models.dev each morning and rebuilds these pages when the data changes, so list-price moves and new model releases land within roughly 24 hours.
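The normalization step mentioned below can be sketched as a rescaling pass. The input shape here is hypothetical, invented for illustration; the real models.dev schema and the pipeline's actual code may differ:

```python
# Hypothetical raw feed entry with per-token USD prices.
RAW = [
    {"id": "gpt-oss-120b",
     "input_per_token": 0.000000068,
     "output_per_token": 0.000000270},
]

def normalize(entries):
    """Rescale per-token USD prices to per-1M-token USD prices for comparison."""
    return [
        {
            "id": e["id"],
            "input_per_1m": round(e["input_per_token"] * 1_000_000, 3),
            "output_per_1m": round(e["output_per_token"] * 1_000_000, 3),
        }
        for e in entries
    ]

print(normalize(RAW))  # prices now match the table's per-1M units
```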

Last updated:

Prices in USD per 1M tokens. Unknown means the provider does not publish per-token pricing.

Data is sourced from models.dev and normalized for comparison. Prices and capabilities may change. Always verify critical production decisions with the provider's official documentation.