AIAgentConfig
Configuration class for an AI agent that specifies the prompt, execution parameters, and behavior.
This class is responsible for defining the various settings and components required for an AI agent to operate. It includes the prompt configuration, iteration limits, and strategies for handling missing tools during execution.
Parameters
The initial prompt configuration for the agent, encapsulating messages, model, and parameters.
The model to use for the agent's prompt execution.
The maximum number of iterations allowed for an agent during its execution to prevent infinite loops.
Strategy that determines how tool calls which appear in the prompt but lack definitions are handled during agent execution. It provides a mechanism to convert or format such missing tool calls and their result messages, which typically arise when tool sets differ between steps or subgraphs that reuse the same history. This keeps the prompt consistent and readable for the model even when it references undefined tools.
Optional processor for the agent's responses. If specified, it modifies the responses returned by the LLM.
Optional serializer used to (de)serialize tool arguments and results. Defaults to KotlinxSerializer.
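Taken together, the parameters above might be assembled as in the following configuration sketch. The parameter names (prompt, model, maxAgentIterations, missingToolsConversionStrategy) and the helper types shown are assumptions inferred from the descriptions above, not a verified constructor signature; consult the generated API reference for the authoritative form.

```kotlin
// Illustrative configuration fragment only: names and types are assumed
// from the parameter descriptions, not from a verified API signature.
val agentConfig = AIAgentConfig(
    // Initial prompt configuration: messages, model, and parameters.
    prompt = prompt("assistant") {
        system("You are a helpful assistant.")
    },
    // Model used for the agent's prompt execution (identifier is hypothetical).
    model = OpenAIModels.Chat.GPT4o,
    // Cap on agent iterations, preventing infinite execution loops.
    maxAgentIterations = 10,
    // Assumed strategy type for handling tool calls that lack definitions.
    missingToolsConversionStrategy = MissingToolsConversionStrategy.Missing(
        ToolCallDescriber.JSON
    ),
)
```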
Constructors
Types
Properties
Specifies the maximum number of iterations an AI agent is allowed to execute.
Defines the strategy for converting tool calls in the prompt when some tool definitions are missing from the request. This is particularly relevant for multi-stage processing or subgraphs, where the tools available at different steps may differ while the same message history is reused.
Specifies the Large Language Model (LLM) configuration to be used by the AI agent.
Specifies the Large Language Model (LLM) used by the AI agent for generating responses.
The prompt configuration used in the AI agent settings.
Defines the Prompt to be used in the AI agent's configuration.
The response processor used for handling and modifying responses generated by the LLM.
Serializer to (de)serialize tool arguments and results.
Functions
Executes the given suspending block of code on the LLM dispatcher (suitable for IO / LLM communication) derived from the provided executorService, or falls back to Dispatchers.IO if none is supplied.
Executes a given suspending block of code within a coroutine context on a strategy dispatcher determined by the provided executorService. If no executorService is supplied, it defaults to AIAgentConfig.strategyExecutorService, or falls back to Dispatchers.Default if that is not configured either.
Submits a block of code to the main dispatcher for execution.
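The executor-to-dispatcher fallback these functions describe can be sketched with plain kotlinx.coroutines primitives. The function name and parameter below (withLLMContext, executorService) are hypothetical illustrations of the described behavior, not the library's actual signatures:

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.asCoroutineDispatcher
import kotlinx.coroutines.withContext
import java.util.concurrent.ExecutorService

// Illustrative sketch of the described fallback: run the block on a
// dispatcher derived from the executor service, or on Dispatchers.IO
// (suitable for IO / LLM communication) when none is supplied.
suspend fun <T> withLLMContext(
    executorService: ExecutorService? = null,   // hypothetical parameter
    block: suspend CoroutineScope.() -> T,
): T {
    val dispatcher = executorService?.asCoroutineDispatcher() ?: Dispatchers.IO
    return withContext(dispatcher, block)
}
```

The strategy variant works the same way, except that the final fallback is Dispatchers.Default rather than Dispatchers.IO.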