AIAgentFunctionalContextBase

Represents the execution context for an AI agent operating in a loop. It provides access to core components such as the environment, configuration, large language model (LLM) context, state management, and storage. Additionally, it enables the agent to store, retrieve, and manage context-specific data during its execution lifecycle.

Inheritors

Properties

expect open override val agentId: String

A unique identifier for the agent, differentiating it from other agents in the system.

open override val agentId: String

A unique identifier representing the current agent instance within the context.

expect open override val agentInput: Any?

The input data passed to the agent, which can be of any type, depending on the agent's context.

open override val agentInput: Any?

Represents the input provided to the agent's execution.

expect open override val config: AIAgentConfig

The configuration settings for the agent, including its prompt and model details, as well as operational constraints like iteration limits.

open override val config: AIAgentConfig

Represents the configuration for an AI agent.

expect open override val environment: AIAgentEnvironment

The environment interface allowing the agent to interact with the external world, including executing tools and reporting problems.

open override val environment: AIAgentEnvironment

Represents the environment in which the agent operates.

expect open override var executionInfo: AgentExecutionInfo

Represents the observability data associated with the AI agent context.

expect open override val llm: AIAgentLLMContext

The context for interacting with the large language model used by the agent, enabling message history retrieval and processing.

open override val llm: AIAgentLLMContext

Represents the AI agent's LLM context, providing mechanisms for managing tools, prompts, and interaction with the execution environment. It ensures thread safety during concurrent read and write operations through the use of sessions.

expect open override val parentContext: AIAgentContext?

Represents the parent context of the AI agent.

expect open override val pipeline: Pipeline

Represents the pipeline associated with the AI agent.

expect open override val runId: String

A unique identifier for the current run or instance of the agent's operation.

open override val runId: String

A unique identifier for the current run associated with the AI agent context. Used to track and differentiate runs within the execution of the agent pipeline.

expect open override val stateManager: AIAgentStateManager

The state management component responsible for tracking and updating the agent's state during its execution.

open override val stateManager: AIAgentStateManager

Manages and tracks the state of an AI agent within the context of its execution.

expect open override val storage: AIAgentStorage

A storage interface providing persistent storage capabilities for the agent's data.

open override val storage: AIAgentStorage

Concurrent-safe key-value storage for an agent, used to manage and persist data within the context of the AI agent stage execution. The storage property provides a thread-safe mechanism for sharing and storing data specific to the agent's operation.

expect open override val strategyName: String

The name of the agent's strategic approach or operational method, determining its behavior during execution.

open override val strategyName: String

Represents the name of the strategy being used in the current AI agent context.

Functions

fun agentId(): String

Retrieves the unique identifier for the agent.

fun agentInput(): Any?

Retrieves the current agent input.

inline fun <T> AIAgentContext.agentInput(): T

Utility function to get AIAgentContext.agentInput and try to cast it to some expected type.

Casts the current instance of a Message.Response to a Message.Assistant. This function should only be used when it is guaranteed that the instance is of type Message.Assistant, as it will throw an exception if the type does not match.

Attempts to cast a Message.Response instance to a Message.Assistant type.

expect open suspend override fun compressHistory(strategy: HistoryCompressionStrategy, preserveMemory: Boolean)

Compresses the current LLM prompt (message history) into a summary, replacing messages with a TLDR.

fun <ToolArg, ToolResult> compressHistory(strategy: HistoryCompressionStrategy = HistoryCompressionStrategy.WholeHistory, preserveMemory: Boolean = true, executorService: ExecutorService? = null)

Compresses the historical data of a tool's operations using the specified compression strategy. This method is designed for optimizing memory usage by reducing the size of stored historical data.

open suspend override fun compressHistory(strategy: HistoryCompressionStrategy, preserveMemory: Boolean)

Compresses the current LLM prompt (message history) into a summary, replacing messages with a TLDR.
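As a sketch, history compression might be paired with latestTokenUsage, assuming code running inside a functional agent block where this context is the receiver (the 100_000 threshold is an arbitrary illustration):

```kotlin
// Hypothetical usage inside a functional agent block
// (AIAgentFunctionalContextBase is the receiver).
if (latestTokenUsage() > 100_000) { // arbitrary threshold for illustration
    compressHistory(
        strategy = HistoryCompressionStrategy.WholeHistory,
        preserveMemory = true,
    )
}
```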

fun config(): AIAgentConfig

Provides the configuration for an AI agent.

Checks if the list of Message.Response contains any instances of Message.Tool.Call.


Extension function to access the Debugger feature from an agent context.

fun environment(): AIAgentEnvironment

Retrieves the current AI agent environment.

expect open suspend override fun executeMultipleTools(toolCalls: List<Message.Tool.Call>, parallelTools: Boolean): List<ReceivedToolResult>

Executes multiple tool calls and returns their results. These calls can optionally be executed in parallel.

fun executeMultipleTools(toolCalls: List<Message.Tool.Call>, parallelTools: Boolean, executorService: ExecutorService? = null): List<ReceivedToolResult>

Executes multiple tool calls either sequentially or in parallel based on the provided configurations.

open suspend override fun executeMultipleTools(toolCalls: List<Message.Tool.Call>, parallelTools: Boolean): List<ReceivedToolResult>

Executes multiple tool calls and returns their results. These calls can optionally be executed in parallel.
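A possible round-trip using this method together with extractToolCalls and sendMultipleToolResults, assuming a functional agent block where this context is the receiver (the prompt text is made up):

```kotlin
// Hypothetical usage inside a functional agent block.
val turn = requestLLMMultiple("Check the health of the api, db, and cache services")
val calls = extractToolCalls(turn)                              // keep only Message.Tool.Call entries
val results = executeMultipleTools(calls, parallelTools = true) // run them concurrently
val followUp = sendMultipleToolResults(results)                 // feed results back to the LLM
```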

expect open suspend override fun <ToolArg, TResult> executeSingleTool(tool: Tool<ToolArg, TResult>, toolArgs: ToolArg, doUpdatePrompt: Boolean): SafeTool.Result<TResult>

Calls a specific tool directly using the provided arguments.

fun <ToolArg, ToolResult> executeSingleTool(tool: Tool<ToolArg, ToolResult>, toolArgs: ToolArg, doUpdatePrompt: Boolean, executorService: ExecutorService? = null): SafeTool.Result<ToolResult>

Executes a single tool with the specified arguments.

open suspend override fun <ToolArg, TResult> executeSingleTool(tool: Tool<ToolArg, TResult>, toolArgs: ToolArg, doUpdatePrompt: Boolean): SafeTool.Result<TResult>

Calls a specific tool directly using the provided arguments.

expect open suspend override fun executeTool(toolCall: Message.Tool.Call): ReceivedToolResult

Executes a tool call and returns the result.

fun executeTool(toolCall: Message.Tool.Call, executorService: ExecutorService? = null): ReceivedToolResult

Executes the specified tool call using an optional executor service.

open suspend override fun executeTool(toolCall: Message.Tool.Call): ReceivedToolResult

Executes a tool call and returns the result.

fun executionInfo(): AgentExecutionInfo

Retrieves the execution information of the agent.

expect open override fun extractToolCalls(response: List<Message.Response>): List<Message.Tool.Call>

Extracts a list of tool call messages from a given list of response messages.


Retrieves a feature from the AIAgentContext.pipeline associated with this context using the specified key.


Retrieves a feature from the AIAgentContext.pipeline associated with this context using the specified key or throws an exception if it is not available.

expect open override fun <T> get(key: AIAgentStorageKey<*>): T?

Retrieves data from the agent's storage using the specified key.

open override fun <T> get(key: AIAgentStorageKey<*>): T?

Retrieves data from the agent's storage using the specified key.


Retrieves the agent-specific context data associated with the current instance.

expect open suspend override fun getHistory(): List<Message>

Retrieves the history of messages exchanged during the agent's execution.

fun getHistory(executorService: ExecutorService? = null): List<Message>

open suspend override fun getHistory(): List<Message>

Retrieves the history of messages exchanged during the agent's execution.

expect open suspend override fun latestTokenUsage(): Int

Retrieves the latest token usage from the prompt within the LLM session.

fun latestTokenUsage(executorService: ExecutorService? = null): Int

Retrieves the most recent token usage count synchronously.

open suspend override fun latestTokenUsage(): Int

Retrieves the latest token usage from the prompt within the LLM session.

fun llm(): AIAgentLLMContext

Returns the current instance of AIAgentLLMContext.

expect open override fun onAssistantMessage(response: Message.Response, action: (Message.Assistant) -> Unit)

Executes the provided action if the given response is of type Message.Assistant.

open override fun onAssistantMessage(response: Message.Response, action: (Message.Assistant) -> Unit)

Executes the provided action if the given response is of type Message.Assistant.
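For illustration, a sketch of reacting only to plain assistant replies, assuming a functional agent block and that Message.Assistant exposes its text as `content` (the prompt is hypothetical; tool-call responses simply fall through):

```kotlin
// Hypothetical usage inside a functional agent block.
val response = requestLLM("Give a one-line status summary")
onAssistantMessage(response) { assistant ->
    // Runs only when the response is a Message.Assistant.
    println(assistant.content)
}
```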

expect open override fun onMultipleAssistantMessages(response: List<Message.Response>, action: (List<Message.Assistant>) -> Unit)

Filters the provided list of response messages to include only assistant messages and, if the filtered list is not empty, performs the specified action with the filtered list.

open override fun onMultipleAssistantMessages(response: List<Message.Response>, action: (List<Message.Assistant>) -> Unit)

Filters the provided list of response messages to include only assistant messages and, if the filtered list is not empty, performs the specified action with the filtered list.

expect open override fun onMultipleToolCalls(response: List<Message.Response>, action: (List<Message.Tool.Call>) -> Unit)

Invokes the provided action when multiple tool call messages are found within a given list of response messages. Filters the list of responses to include only instances of Message.Tool.Call and executes the action on the filtered list if it is not empty.

open override fun onMultipleToolCalls(response: List<Message.Response>, action: (List<Message.Tool.Call>) -> Unit)

Invokes the provided action when multiple tool call messages are found within a given list of response messages. Filters the list of responses to include only instances of Message.Tool.Call and executes the action on the filtered list if it is not empty.


Builds and returns an instance of the AIAgentFunctionalPipeline.

expect open override fun remove(key: AIAgentStorageKey<*>): Boolean

Removes a feature or data associated with the specified key from the agent's storage.

open override fun remove(key: AIAgentStorageKey<*>): Boolean

Removes a feature or data associated with the specified key from the agent's storage.


Removes the agent-specific context data associated with the current context.

expect open suspend override fun requestLLM(message: String, allowToolCalls: Boolean): Message.Response

Sends a message to a Large Language Model (LLM) and optionally allows the use of tools during the LLM interaction. The message becomes part of the current prompt, and the LLM's response is processed accordingly, either with or without tool integrations based on the provided parameters.

actual open suspend override fun requestLLM(message: String, allowToolCalls: Boolean): Message.Response

Sends a message to a Large Language Model (LLM) and optionally allows the use of tools during the LLM interaction. The message becomes part of the current prompt, and the LLM's response is processed accordingly, either with or without tool integrations based on the provided parameters.

fun requestLLM(message: String, allowToolCalls: Boolean = true, executorService: ExecutorService? = null): Message.Response

Sends a request to the Large Language Model (LLM) and retrieves its response.
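A minimal sketch, assuming a functional agent block where this context is the receiver (the prompt is illustrative):

```kotlin
// Hypothetical usage inside a functional agent block.
// With allowToolCalls = false the model must answer with text only.
val answer: Message.Response = requestLLM(
    "Reply with a one-sentence summary of the task",
    allowToolCalls = false,
)
```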

expect open suspend override fun requestLLMForceOneTool(message: String, tool: ToolDescriptor): Message.Response
expect open suspend override fun requestLLMForceOneTool(message: String, tool: Tool<*, *>): Message.Response

Sends a message to a Large Language Model (LLM) and forces it to use a specific tool. The message becomes part of the current prompt, and the LLM is instructed to use only the specified tool.

fun requestLLMForceOneTool(message: String, tool: ToolDescriptor, executorService: ExecutorService? = null): Message.Response

Sends a request to the LLM (Large Language Model) system using a specified tool, ensuring the use of exactly one tool in the response generation process.

fun requestLLMForceOneTool(message: String, tool: Tool<*, *>, executorService: ExecutorService? = null): Message.Response

Sends a request to the LLM (Large Language Model) forcing the use of a specified tool and returns the response.

open suspend override fun requestLLMForceOneTool(message: String, tool: ToolDescriptor): Message.Response
open suspend override fun requestLLMForceOneTool(message: String, tool: Tool<*, *>): Message.Response

Sends a message to a Large Language Model (LLM) and forces it to use a specific tool. The message becomes part of the current prompt, and the LLM is instructed to use only the specified tool.
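A sketch of forcing a single tool, assuming a functional agent block and a hypothetical `searchTool` of type Tool<*, *> registered with the agent:

```kotlin
// Hypothetical usage inside a functional agent block; searchTool is made up.
val toolResponse: Message.Response = requestLLMForceOneTool(
    "Look up the latest release version",
    tool = searchTool,
)
```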

expect open suspend override fun requestLLMMultiple(message: String): List<Message.Response>

Sends a message to a Large Language Model (LLM) and gets multiple LLM responses with tool calls enabled. The message becomes part of the current prompt, and multiple responses from the LLM are collected.

fun requestLLMMultiple(message: String, executorService: ExecutorService? = null): List<Message.Response>

Sends a request to the Large Language Model (LLM) and retrieves multiple responses.

open suspend override fun requestLLMMultiple(message: String): List<Message.Response>

Sends a message to a Large Language Model (LLM) and gets multiple LLM responses with tool calls enabled. The message becomes part of the current prompt, and multiple responses from the LLM are collected.

expect open suspend override fun requestLLMOnlyCallingTools(message: String): Message.Response

Sends a message to a Large Language Model (LLM) that will only call tools without generating text responses. The message becomes part of the current prompt, and the LLM is instructed to only use tools.

fun requestLLMOnlyCallingTools(message: String, executorService: ExecutorService? = null): Message.Response

Executes a request to the LLM, restricting the process to only calling external tools as needed.

open suspend override fun requestLLMOnlyCallingTools(message: String): Message.Response

Sends a message to a Large Language Model (LLM) that will only call tools without generating text responses. The message becomes part of the current prompt, and the LLM is instructed to only use tools.

expect open suspend override fun requestLLMStreaming(message: String, structureDefinition: StructureDefinition?): Flow<StreamFrame>

Sends a message to a Large Language Model (LLM) and streams the LLM response. The message becomes part of the current prompt, and the LLM's response is streamed as it's generated.

fun requestLLMStreaming(message: String, structureDefinition: StructureDefinition?, executorService: ExecutorService? = null): Flow.Publisher<StreamFrame>

Sends a request to the Large Language Model (LLM) for streaming data.

open suspend override fun requestLLMStreaming(message: String, structureDefinition: StructureDefinition?): Flow<StreamFrame>

Sends a message to a Large Language Model (LLM) and streams the LLM response. The message becomes part of the current prompt, and the LLM's response is streamed as it's generated.
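A streaming sketch, assuming a functional agent block; how text is extracted from a StreamFrame depends on the streaming API, so the frame is printed as-is here:

```kotlin
// Hypothetical usage inside a functional agent block.
requestLLMStreaming("Draft a long changelog entry", structureDefinition = null)
    .collect { frame ->
        print(frame) // StreamFrame carries incremental output as it is generated
    }
```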

expect inline suspend fun <T> requestLLMStructured(message: String, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>
actual inline suspend fun <T> requestLLMStructured(message: String, examples: List<T>, fixingParser: StructureFixingParser?): Result<StructuredResponse<T>>

suspend fun <T : Any> requestLLMStructured(message: String, clazz: KClass<T>, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>

Sends a structured request to the Large Language Model (LLM) and processes the response.

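A structured-output sketch, assuming a functional agent block; `WeatherReport` is a hypothetical @Serializable data class, and the exact accessors on StructuredResponse follow the structured-output API:

```kotlin
// Hypothetical usage inside a functional agent block; WeatherReport is made up.
@Serializable
data class WeatherReport(val city: String, val tempCelsius: Double)

val result: Result<StructuredResponse<WeatherReport>> =
    requestLLMStructured("Report the current weather in Oslo")

result.onSuccess { response ->
    // response wraps the parsed WeatherReport per the structured-output API
}
```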

Provides the root context of the current agent. If the root context is not defined, this function defaults to returning the current instance.

fun runId(): String

Retrieves the current run identifier.

expect open suspend override fun sendMultipleToolResults(results: List<ReceivedToolResult>): List<Message.Response>

Adds multiple tool results to the prompt and gets multiple LLM responses.

fun sendMultipleToolResults(results: List<ReceivedToolResult>, executorService: ExecutorService? = null): List<Message.Response>

Sends multiple tool results for processing and returns the corresponding responses.

Adds multiple tool results to the prompt and gets multiple LLM responses.

expect open suspend override fun sendToolResult(toolResult: ReceivedToolResult): Message.Response

Adds a tool result to the prompt and requests an LLM response.

fun sendToolResult(toolResult: ReceivedToolResult, executorService: ExecutorService? = null): Message.Response

Sends the provided tool result for processing.

open suspend override fun sendToolResult(toolResult: ReceivedToolResult): Message.Response

Adds a tool result to the prompt and requests an LLM response.
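The tool-execution loop these methods enable can be sketched as follows, assuming a functional agent block and that Message.Tool.Call is a subtype of Message.Response (the prompt is illustrative):

```kotlin
// Hypothetical usage inside a functional agent block.
var response: Message.Response = requestLLM("Compute 2 + 2 with the calculator tool")
while (response is Message.Tool.Call) {
    val result: ReceivedToolResult = executeTool(response) // run the requested tool
    response = sendToolResult(result)                      // return its output to the LLM
}
// At this point `response` is a non-tool message, typically Message.Assistant.
```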

fun stateManager(): AIAgentStateManager

Provides an instance of AIAgentStateManager responsible for managing the state of an AI agent. This function allows access to the state management operations for coordinating AI agent behavior.

fun storage(): AIAgentStorage

Provides access to the AIAgentStorage instance.

expect open override fun store(key: AIAgentStorageKey<*>, value: Any)

Stores a feature in the agent's storage using the specified key.

open override fun store(key: AIAgentStorageKey<*>, value: Any)

Stores a feature in the agent's storage using the specified key.
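A storage round-trip sketch using store, get, and remove, assuming a functional agent block; the key name and stored type are illustrative, and constructing AIAgentStorageKey directly is an assumption (the library may provide a factory instead):

```kotlin
// Hypothetical usage inside a functional agent block.
val attemptsKey = AIAgentStorageKey<Int>("retry-attempts") // made-up key name

store(attemptsKey, 3)                 // persist a value under the key
val attempts: Int? = get(attemptsKey) // read it back; null if absent
remove(attemptsKey)                   // returns true if something was removed
```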


Stores the given agent context data within the current AI agent context.

fun strategyName(): String

Retrieves the name of the strategy.

expect inline suspend fun <Input, Output> subtask(taskDescription: String, input: Input, tools: List<Tool<*, *>>? = null, llmModel: LLModel? = null, llmParams: LLMParams? = null, runMode: ToolCalls = ToolCalls.SEQUENTIAL, assistantResponseRepeatMax: Int? = null): Output

expect open suspend override fun <Input, OutputTransformed> subtask(taskDescription: String, input: Input, tools: List<Tool<*, *>>?, finishTool: Tool<*, OutputTransformed>, llmModel: LLModel?, llmParams: LLMParams?, runMode: ToolCalls, assistantResponseRepeatMax: Int?, responseProcessor: ResponseProcessor?): OutputTransformed

Executes a subtask within the AI agent's functional context. This method enables the use of tools to achieve a specific task based on the input provided. It defines the task using an inline function, employs tools iteratively, and attempts to complete the subtask with a designated finishing tool.

expect open suspend override fun <Input, Output : Any> subtask(taskDescription: String, input: Input, outputClass: KClass<Output>, tools: List<Tool<*, *>>?, llmModel: LLModel?, llmParams: LLMParams?, runMode: ToolCalls, assistantResponseRepeatMax: Int?, responseProcessor: ResponseProcessor?): Output

Executes a subtask within the larger context of an AI agent's functional operation. This method allows you to define a specific task to be performed, using the given input, tools, and optional configuration parameters.

fun subtask(taskDescription: String): SubtaskBuilder
actual inline suspend fun <Input, Output> subtask(taskDescription: String, input: Input, tools: List<Tool<*, *>>?, llmModel: LLModel?, llmParams: LLMParams?, runMode: ToolCalls, assistantResponseRepeatMax: Int?): Output

open suspend override fun <Input, OutputTransformed> subtask(taskDescription: String, input: Input, tools: List<Tool<*, *>>?, finishTool: Tool<*, OutputTransformed>, llmModel: LLModel?, llmParams: LLMParams?, runMode: ToolCalls, assistantResponseRepeatMax: Int?, responseProcessor: ResponseProcessor?): OutputTransformed

Executes a subtask within the AI agent's functional context. This method enables the use of tools to achieve a specific task based on the input provided. It defines the task using an inline function, employs tools iteratively, and attempts to complete the subtask with a designated finishing tool.

open suspend override fun <Input, Output : Any> subtask(taskDescription: String, input: Input, outputClass: KClass<Output>, tools: List<Tool<*, *>>?, llmModel: LLModel?, llmParams: LLMParams?, runMode: ToolCalls, assistantResponseRepeatMax: Int?, responseProcessor: ResponseProcessor?): Output

Executes a subtask within the larger context of an AI agent's functional operation. This method allows you to define a specific task to be performed, using the given input, tools, and optional configuration parameters.

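A subtask sketch using the inline overload with most parameters left at their defaults, assuming a functional agent block; the tool and task wording are made up:

```kotlin
// Hypothetical usage inside a functional agent block; readFileTool is made up.
val summary: String = subtask(
    taskDescription = "Summarize the given file in three bullet points",
    input = "/path/to/input.md",
    tools = listOf(readFileTool),
    runMode = ToolCalls.SEQUENTIAL,
)
```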

expect open suspend override fun <Input> subtaskWithVerification(taskDescription: String, input: Input, tools: List<Tool<*, *>>?, llmModel: LLModel?, llmParams: LLMParams?, runMode: ToolCalls, assistantResponseRepeatMax: Int?, responseProcessor: ResponseProcessor?): CriticResult<Input>

Executes a subtask with validation and verification of the results. The method defines a subtask for the AI agent using the provided input and additional parameters and ensures that the output is evaluated based on its correctness and feedback.

open suspend override fun <Input> subtaskWithVerification(taskDescription: String, input: Input, tools: List<Tool<*, *>>?, llmModel: LLModel?, llmParams: LLMParams?, runMode: ToolCalls, assistantResponseRepeatMax: Int?, responseProcessor: ResponseProcessor?): CriticResult<Input>

Executes a subtask with validation and verification of the results. The method defines a subtask for the AI agent using the provided input and additional parameters and ensures that the output is evaluated based on its correctness and feedback.

inline fun <T> AIAgentContext.with(executionInfo: AgentExecutionInfo, block: (executionInfo: AgentExecutionInfo, eventId: String) -> T): T

Executes a block of code with a modified execution context.

inline fun <T> AIAgentContext.with(partName: String, block: (executionInfo: AgentExecutionInfo, eventId: String) -> T): T

Executes a block of code with a modified execution context, creating a parent-child relationship between execution contexts for tracing purposes.