OpenAIChatParams
OpenAI chat/completions parameters layered on top of LLMParams.
These options mirror the fields commonly used with OpenAI’s Chat Completions / Responses APIs and add OpenAI-specific controls (audio, logprobs, reasoning effort, response formatting, service tiers, etc.). All parameters are optional; when unset, provider/model defaults apply.
Constructors
Properties
Audio output configuration when using audio-capable models.
Number in [-2.0, 2.0]; penalizes frequent tokens to reduce repetition.
Whether to allow multiple tool calls in parallel.
Number in [-2.0, 2.0]; encourages the introduction of new tokens/topics.
Stable cache key for prompt caching (non-blank when provided).
Constrains reasoning effort (e.g., MINIMAL/LOW/MEDIUM/HIGH).
Stable app-scoped user ID for policy enforcement (non-blank when provided).
Processing tier selection for cost/latency trade-offs.
Number of top alternatives per position (0–20). Requires logprobs = true.
Nucleus sampling in (0.0, 1.0]; use instead of temperature.
Configure web search tool usage (if supported).
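As a rough illustration of how these optional, range-constrained parameters fit together, here is a minimal self-contained sketch of a params-style data class. The class and property names (frequencyPenalty, topP, topLogprobs, and so on) are assumptions for illustration, not the library's actual API; the validation mirrors the ranges documented above.

```kotlin
// Hypothetical sketch, NOT the real OpenAIChatParams: all parameters are
// optional (null = provider/model default), and ranges follow the docs above.
data class ChatParamsSketch(
    val frequencyPenalty: Double? = null,  // in [-2.0, 2.0]
    val presencePenalty: Double? = null,   // in [-2.0, 2.0]
    val topP: Double? = null,              // in (0.0, 1.0]; use instead of temperature
    val logprobs: Boolean? = null,
    val topLogprobs: Int? = null,          // 0..20; requires logprobs = true
    val parallelToolCalls: Boolean? = null,
) {
    init {
        frequencyPenalty?.let { require(it in -2.0..2.0) { "frequencyPenalty must be in [-2.0, 2.0]" } }
        presencePenalty?.let { require(it in -2.0..2.0) { "presencePenalty must be in [-2.0, 2.0]" } }
        topP?.let { require(it > 0.0 && it <= 1.0) { "topP must be in (0.0, 1.0]" } }
        topLogprobs?.let {
            require(it in 0..20) { "topLogprobs must be in 0..20" }
            require(logprobs == true) { "topLogprobs requires logprobs = true" }
        }
    }
}
```

For example, `ChatParamsSketch(logprobs = true, topLogprobs = 5)` is valid, while `ChatParamsSketch(topLogprobs = 5)` fails the `logprobs = true` precondition.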
Functions
Creates a copy of this instance with the ability to modify any of its properties.
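Since this follows the Kotlin data-class `copy` idiom, only the named properties change and everything else carries over. A small sketch with an illustrative stand-in class (the class and property names are hypothetical, not the library's API):

```kotlin
// Hypothetical stand-in for a params data class, to show the copy idiom.
data class Params(val topP: Double? = null, val frequencyPenalty: Double? = null)

fun main() {
    val base = Params(topP = 0.9)
    val tuned = base.copy(frequencyPenalty = 0.5) // topP = 0.9 is preserved
    println(tuned) // Params(topP=0.9, frequencyPenalty=0.5)
}
```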