LLMParamsUpdateContext
Represents a mutable context for updating the parameters of an LLM (large language model). The class is used internally to stage changes to configuration such as temperature, speculation, schema, and tool choice before converting back to an immutable LLMParams instance.
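The mutable-context-to-immutable-instance pattern described above can be sketched in a self-contained way. All names below (`Params`, `ParamsUpdateContext`, `update`) are illustrative assumptions, not the library's actual API:

```kotlin
// Hypothetical, minimal sketch of the update-context pattern: a mutable
// context stages changes, then converts back to an immutable instance.
data class Params(
    val temperature: Double? = null,
    val speculation: String? = null,
    val includeThoughts: Boolean? = null,
)

class ParamsUpdateContext(params: Params) {
    // Mutable mirrors of the immutable properties.
    var temperature: Double? = params.temperature
    var speculation: String? = params.speculation
    var includeThoughts: Boolean? = params.includeThoughts

    // Convert the mutable context back into an immutable instance.
    fun toParams() = Params(temperature, speculation, includeThoughts)
}

fun Params.update(block: ParamsUpdateContext.() -> Unit): Params =
    ParamsUpdateContext(this).apply(block).toParams()

fun main() {
    val base = Params(temperature = 0.2)
    val updated = base.update {
        temperature = 0.9      // higher temperature: more diverse output
        includeThoughts = true // request reasoning blocks
    }
    println(updated)           // `base` itself is unchanged
}
```

Keeping the public type immutable while confining mutation to a short-lived context makes parameter updates explicit and avoids shared mutable state.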
Properties
If true, requests the model to add reasoning blocks to the response. Defaults to null. When set to true, responses may include detailed reasoning steps. When false or null, responses are typically shorter and faster.
A schema configuration that describes the structure of the output, such as a JSON-based schema definition for structured output generation. This property is mutable so the schema can be updated.
A speculation string that influences model behavior, intended to improve response speed and accuracy. This property is mutable so the speculation setting can be changed.
The temperature value that adjusts randomness in the model's output. Higher values produce more diverse results, while lower values yield more deterministic responses. This property is mutable to allow updates during the context's lifecycle.
Hard cap on the number of tokens the model may spend on reasoning. Ignored by models that don't support reasoning budgets. Use this to limit reasoning-token usage when includeThoughts is enabled.
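The interplay between the budget and includeThoughts can be sketched as follows. This is an illustration only; the names (`ReasoningConfig`, `effectiveBudget`, `thinkingBudgetTokens`) are assumptions, not the library's API:

```kotlin
// Illustrative sketch: a reasoning-token cap is only meaningful when
// thoughts are requested and the model supports budgets; otherwise it
// is ignored, as the description above states.
data class ReasoningConfig(
    val includeThoughts: Boolean? = null,
    val thinkingBudgetTokens: Int? = null,
)

fun effectiveBudget(config: ReasoningConfig, modelSupportsBudgets: Boolean): Int? =
    if (config.includeThoughts == true && modelSupportsBudgets)
        config.thinkingBudgetTokens
    else
        null // budget ignored: no thoughts requested or no budget support

fun main() {
    val cfg = ReasoningConfig(includeThoughts = true, thinkingBudgetTokens = 2048)
    println(effectiveBudget(cfg, modelSupportsBudgets = true))  // 2048
    println(effectiveBudget(cfg, modelSupportsBudgets = false)) // null
}
```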
Defines the behavior of the LLM regarding tool usage, allowing choices such as automatic tool invocation or restricted tool interactions. This property is mutable so tool behavior can be reconfigured.
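A tool-choice setting of this kind is often modeled as a small closed hierarchy. The sketch below is a hypothetical illustration of that shape, not the library's real type:

```kotlin
// Hypothetical model of a tool-choice setting: the closed hierarchy
// enumerates the allowed behaviors described above.
sealed interface ToolChoice {
    object Auto : ToolChoice                         // model decides whether to call tools
    object None : ToolChoice                         // tool use disabled
    data class Required(val toolName: String) : ToolChoice // force a specific tool

    fun describe(): String = when (this) {
        Auto -> "auto"
        None -> "none"
        is Required -> "required:$toolName"
    }
}

fun main() {
    val choice: ToolChoice = ToolChoice.Required("search")
    println(choice.describe()) // required:search
}
```

A sealed hierarchy lets `when` expressions over the choice be checked exhaustively at compile time, which suits a setting with a fixed set of modes.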