Chat

object Chat

Object that provides pre-configured instances of OpenAI GPT models for different use cases. These instances are versatile, high-performance large language models that handle tasks such as text completion, image input processing, structured outputs, and tool calling.
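
As a minimal usage sketch (the LLMClient interface and its complete function below are hypothetical placeholders, not part of this API; only Chat, its properties, and the LLModel type come from this page), a model instance is simply selected from this object and passed wherever an LLModel is expected:

```kotlin
// Hypothetical client interface -- NOT part of this API; it exists only to
// illustrate passing one of the pre-configured LLModel instances around.
interface LLMClient {
    suspend fun complete(model: LLModel, prompt: String): String
}

// Chat.O3 is a property documented on this page; everything else in this
// snippet is an assumption made for illustration.
suspend fun solveProblem(client: LLMClient, problem: String): String =
    client.complete(model = Chat.O3, prompt = problem)
```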

Properties

GPT-4.1 is a model for complex tasks. It is well suited for problem solving across domains.

GPT-4.1 mini provides a balance between intelligence, speed, and cost that makes it an attractive model for many use cases.

GPT-4.1 nano is the smallest and most affordable model in the GPT-4.1 family. It's designed for tasks that require basic capabilities at the lowest possible cost.

GPT-4o (“o” for “omni”) is a versatile, high-intelligence flagship model. It accepts both text and image inputs, and produces text outputs (including Structured Outputs). It is the best model for most tasks, and is currently the most capable model outside of the o-series models.

GPT-4o mini is a smaller, more affordable version of GPT-4o that maintains high quality while being more cost-effective. It's designed for tasks that don't require the full capabilities of GPT-4o.

GPT-5 is a flagship model for coding, reasoning, and agentic tasks across domains.

GPT-5.1 is a flagship model for coding and agentic tasks with configurable reasoning effort, including a non-reasoning mode.

GPT-5.1-Codex is a version of GPT-5.1 optimized for agentic coding tasks in Codex or similar environments. It's available in the Responses API only, and the underlying model snapshot will be regularly updated.

GPT-5.2 is OpenAI's flagship model for coding and agentic tasks across industries. It supports reasoning and is available through both the Responses and Chat Completions endpoints.

GPT-5.2 pro is available in the Responses API only to enable support for multi-turn model interactions before responding to API requests, and other advanced API features in the future. Supports reasoning.effort: medium, high, xhigh.

GPT-5-Codex is a version of GPT-5 optimized for agentic coding tasks in Codex and similar agent environments. It's available in the Responses API only, and the underlying model snapshot will be regularly updated.

GPT-5 mini is a faster, cost-efficient version of GPT-5 for well-defined tasks.

GPT-5 nano is the fastest, most cost-efficient version of GPT-5. Great for summarization and classification tasks.

GPT-5 pro uses more compute to think harder and provide consistently better answers. GPT-5 pro is available in the Responses API only to enable support for multi-turn model interactions before responding to API requests, and other advanced API features in the future. As the most advanced reasoning model, GPT-5 pro defaults to (and only supports) reasoning.effort: high. GPT-5 pro does not support code interpreter.
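
As an illustration of the effort constraints described for GPT-5.2 pro and GPT-5 pro (the ReasoningEffort enum and the helper function below are hypothetical placeholders, not part of this API), a caller targeting GPT-5 pro would always pin the reasoning effort to high before issuing a request:

```kotlin
// Hypothetical type -- NOT part of this API; it only mirrors the
// reasoning.effort values mentioned above (medium, high, xhigh).
enum class ReasoningEffort { MEDIUM, HIGH, XHIGH }

// GPT-5 pro defaults to, and only accepts, high reasoning effort, so any
// other requested value is rejected here rather than sent to the API.
fun effortForGpt5Pro(requested: ReasoningEffort = ReasoningEffort.HIGH): ReasoningEffort {
    require(requested == ReasoningEffort.HIGH) {
        "GPT-5 pro only supports reasoning.effort = high"
    }
    return requested
}
```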

val O1: LLModel

The o1 series of models are trained with reinforcement learning to perform complex reasoning. o1 models think before they answer, producing a long internal chain of thought before responding to the user.

val O3: LLModel

o3 is a well-rounded and powerful model across domains. It is capable of math, science, coding, and visual reasoning tasks. It also excels at technical writing and instruction-following. Use it to think through multi-step problems that involve analysis across text, code, and images.

o3-mini is a smaller, more affordable version of o3. It's a small reasoning model that provides high intelligence at the same cost and latency targets as o1-mini. o3-mini supports key developer features, like Structured Outputs, function calling, and the Batch API.

o4-mini is a smaller, more affordable o-series reasoning model that maintains high quality while being more cost-effective. It's optimized for fast, effective reasoning with exceptionally efficient performance in coding and visual tasks.