LLMParams
Represents configuration parameters for controlling the behavior of a language model.
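A minimal construction sketch is shown below; the parameter names (temperature, numberOfChoices) are illustrative, inferred from the property descriptions on this page, and may not match the actual constructor signature.

```kotlin
// Illustrative sketch: parameter names are assumed from the property
// descriptions below and may differ from the real constructor signature.
val params = LLMParams(
    temperature = 0.2,   // lower values give more focused, deterministic output
    numberOfChoices = 1  // generate a single completion
)
```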
Constructors
Types
Properties
If true, requests the model to add reasoning blocks to the response. Defaults to null. When set to true, responses may include detailed reasoning steps. When false or null, responses are typically shorter and faster.
Specifies the number of alternative completions to generate.
Defines the schema for the model's structured response format.
Reserved for a speculative draft of what the result might look like. Supported only by some models, but can greatly improve the speed and accuracy of the result. For example, in OpenAI this feature is called PredictedOutput.
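For instance, a caller that expects the model to return a lightly edited version of an existing snippet could pass that snippet as the speculative draft. A hedged sketch, where the speculation parameter name is assumed for illustration:

```kotlin
// Sketch only: `speculation` is an assumed parameter name for the
// speculative-draft feature described above; the real name may differ.
val expectedDraft = "fun greet(name: String): String = \"Hello, \" + name"

val params = LLMParams(
    speculation = expectedDraft // draft of what the result is expected to look like
)
```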
Controls the randomness of the output. Higher values encourage more diverse results, while lower values produce more focused, deterministic outputs. The value is optional and defaults to null.
Hard cap on reasoning tokens. Ignored by models that don't support budgets. This can be used to limit the number of tokens spent on reasoning when includeThoughts is enabled.
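A sketch combining the two reasoning-related properties; includeThoughts is documented on this page, while the name of the budget parameter (thinkingBudget here) is assumed for illustration:

```kotlin
// Sketch: `includeThoughts` is documented above; `thinkingBudget` is an
// assumed name for the reasoning-token cap and may differ in the real API.
val reasoningParams = LLMParams(
    includeThoughts = true, // ask the model to emit reasoning blocks
    thinkingBudget = 2048   // hard cap on reasoning tokens (ignored by unsupported models)
)
```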
Controls the tool-calling behavior of the LLM.
Functions
Component functions for destructuring declarations
Creates a copy of this instance with the ability to modify any of its properties.
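Given the component functions and the copy function, the class presumably follows Kotlin data-class conventions; a hedged sketch using the illustrative property names from the earlier examples:

```kotlin
// Sketch assuming data-class-style semantics and illustrative property names.
val base = LLMParams(temperature = 0.7)

// copy(): derive a new configuration, changing only selected properties.
val deterministic = base.copy(temperature = 0.0)

// The component functions enable destructuring, e.g.
// val (temperature, /* ... */) = deterministic
// (the component order is not shown on this page, so it is left as a comment).
```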