AIAgentFunctionalContextBase

Base class for functional/planner context implementations across all targets.

It inherits all shared behavior from AIAgentFunctionalContextBaseCommon.


Properties

open override val agentId: String

A unique identifier representing the current agent instance within the context.

open override val agentInput: Any?

Represents the input provided to the agent's execution.

open override val config: AIAgentConfig

Represents the configuration for an AI agent.

open override val environment: AIAgentEnvironment

Represents the environment in which the agent operates.

Represents the observability data associated with the AI Agent context.

open override val llm: AIAgentLLMContext

Represents the AI agent's LLM context, providing mechanisms for managing tools, prompts, and interaction with the execution environment. It ensures thread safety during concurrent read and write operations through the use of sessions.

open override val parentContext: AIAgentContext?

Represents the parent context of the AI Agent.

expect open override val pipeline: Pipeline

Represents the pipeline associated with the AI agent.

open override val runId: String

A unique identifier for the current run associated with the AI agent context. Used to track and differentiate runs within the execution of the agent pipeline.

Manages and tracks the state of an AI agent within the context of its execution.

open override val storage: AIAgentStorage

Concurrent-safe key-value storage for an agent, used to manage and persist data within the context of the AI agent stage execution. The storage property provides a thread-safe mechanism for sharing and storing data specific to the agent's operation.

open override val strategyName: String

Represents the name of the strategy being used in the current AI agent context.

Functions

Retrieves the unique identifier for the agent.

fun agentInput(): Any?

Retrieves the current agent input.

inline fun <T> AIAgentContext.agentInput(): T

Utility function to get AIAgentContext.agentInput and try to cast it to some expected type.

fun Message.Response.asAssistantMessage(): Message.Assistant

Casts the current instance of a Message.Response to a Message.Assistant. This function should only be used when it is guaranteed that the instance is of type Message.Assistant, as it will throw an exception if the type does not match.

fun Message.Response.asAssistantMessageOrNull(): Message.Assistant?

Attempts to cast a Message.Response instance to a Message.Assistant type, returning null if the cast fails.

suspend fun compressHistory(strategy: HistoryCompressionStrategy = HistoryCompressionStrategy.WholeHistory, preserveMemory: Boolean = true)

Compresses the current LLM prompt (message history) into a summary, replacing messages with a TLDR.

fun compressHistory(strategy: HistoryCompressionStrategy = HistoryCompressionStrategy.WholeHistory, preserveMemory: Boolean = true, executorService: ExecutorService? = null)

Blocking variant that compresses the message history using the specified compression strategy, optionally running on the given executor service.
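The "compress to TLDR" behavior can be pictured as follows. This is a minimal illustrative sketch, not the Koog implementation: `compressToTldr`, `keepLast`, and the `summarize` callback are hypothetical stand-ins for the strategy-driven compression the real API performs.

```kotlin
// Hypothetical sketch: replace all but the most recent messages with a
// single TLDR summary, keeping the tail of the conversation intact.
fun compressToTldr(
    history: List<String>,
    keepLast: Int = 2,
    summarize: (List<String>) -> String, // stand-in for the LLM-produced summary
): List<String> {
    if (history.size <= keepLast) return history
    val toCompress = history.dropLast(keepLast)
    val tldr = "TLDR: " + summarize(toCompress)
    return listOf(tldr) + history.takeLast(keepLast)
}

fun main() {
    val history = listOf("sys", "u1", "a1", "u2", "a2")
    val compressed = compressToTldr(history, keepLast = 2) { msgs -> "${msgs.size} earlier messages" }
    println(compressed) // [TLDR: 3 earlier messages, u2, a2]
}
```

The real method additionally lets you choose which slice of the history to summarize via HistoryCompressionStrategy and whether to preserve memory-related messages.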

Provides the configuration for an AI agent.

open fun List<Message.Response>.containsToolCalls(): Boolean

Checks if the list of Message.Response contains any instances of Message.Tool.Call.

Extension function to access the Debugger feature from an agent context.

Retrieves the current AI agent environment.

suspend fun executeMultipleTools(toolCalls: List<Message.Tool.Call>, parallelTools: Boolean = false): List<ReceivedToolResult>

Executes multiple tool calls and returns their results. These calls can optionally be executed in parallel.

fun executeMultipleTools(toolCalls: List<Message.Tool.Call>, parallelTools: Boolean, executorService: ExecutorService? = null): List<ReceivedToolResult>

Executes multiple tool calls either sequentially or in parallel based on the provided configuration.

suspend fun <ToolArg, TResult> executeSingleTool(tool: Tool<ToolArg, TResult>, toolArgs: ToolArg, doUpdatePrompt: Boolean = true): SafeTool.Result<TResult>

Calls a specific tool directly using the provided arguments.

fun <ToolArg, ToolResult> executeSingleTool(tool: Tool<ToolArg, ToolResult>, toolArgs: ToolArg, doUpdatePrompt: Boolean, executorService: ExecutorService? = null): SafeTool.Result<ToolResult>

Executes a single tool with the specified arguments.

suspend fun executeTool(toolCall: Message.Tool.Call): ReceivedToolResult

Executes a tool call and returns the result.

fun executeTool(toolCall: Message.Tool.Call, executorService: ExecutorService? = null): ReceivedToolResult

Executes the specified tool call using an optional executor service.

Retrieves the execution information of the agent.

fun extractToolCalls(response: List<Message.Response>): List<Message.Tool.Call>

Extracts a list of tool call messages from a given list of response messages.
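Helpers such as containsToolCalls, extractToolCalls, and the asAssistantMessage casts are all type-based filters over response lists. The sketch below mirrors that pattern with a hypothetical sealed hierarchy; the real types in ai.koog.prompt.message are considerably richer.

```kotlin
// Minimal stand-in hierarchy, only to illustrate the filtering pattern.
sealed interface Response
data class Assistant(val text: String) : Response
data class ToolCall(val toolName: String, val args: String) : Response

// Mirrors of the extension helpers documented above.
fun List<Response>.containsToolCalls(): Boolean = any { it is ToolCall }
fun List<Response>.extractToolCalls(): List<ToolCall> = filterIsInstance<ToolCall>()

fun main() {
    val responses = listOf(Assistant("thinking..."), ToolCall("search", """{"q":"koog"}"""))
    println(responses.containsToolCalls()) // true
    println(responses.extractToolCalls().map { it.toolName }) // [search]
}
```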

Retrieves a feature from the AIAgentContext.pipeline associated with this context using the specified key.

Retrieves a feature from the AIAgentContext.pipeline associated with this context using the specified key or throws an exception if it is not available.

open override fun <T> get(key: AIAgentStorageKey<*>): T?

Retrieves data from the agent's storage using the specified key.

Retrieves the agent-specific context data associated with the current instance.

open suspend override fun getHistory(): List<Message>

Retrieves the history of messages exchanged during the agent's execution.

fun getHistory(executorService: ExecutorService? = null): List<Message>

Blocking variant that retrieves the message history, optionally running on the given executor service.

suspend fun latestTokenUsage(): Int

Retrieves the latest token usage from the prompt within the LLM session.

fun latestTokenUsage(executorService: ExecutorService? = null): Int

Retrieves the most recent token usage count synchronously.
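A common use of the token-usage reading is budget-driven history maintenance: check usage after an LLM call and compress when a threshold is crossed. The sketch below simulates this with plain functions; `tokenBudgetGuard` and its `usage`/`compress` callbacks are hypothetical stand-ins for latestTokenUsage and compressHistory.

```kotlin
// Hypothetical budget guard: invokes `compress` (standing in for
// compressHistory) whenever `usage` (standing in for latestTokenUsage)
// exceeds the budget. Returns true if compression was triggered.
fun tokenBudgetGuard(usage: () -> Int, compress: () -> Unit, budget: Int): Boolean {
    return if (usage() > budget) {
        compress()
        true
    } else {
        false
    }
}

fun main() {
    var compressions = 0
    println(tokenBudgetGuard(usage = { 120_000 }, compress = { compressions++ }, budget = 100_000)) // true
    println(tokenBudgetGuard(usage = { 8_000 }, compress = { compressions++ }, budget = 100_000))   // false
    println(compressions) // 1
}
```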

Returns the current instance of AIAgentLLMContext.

fun onAssistantMessage(response: Message.Response, action: (Message.Assistant) -> Unit)

Executes the provided action if the given response is of type Message.Assistant.

fun onMultipleAssistantMessages(response: List<Message.Response>, action: (List<Message.Assistant>) -> Unit)

Filters the provided list of response messages to include only assistant messages and, if the filtered list is not empty, performs the specified action with the filtered list.

fun onMultipleToolCalls(response: List<Message.Response>, action: (List<Message.Tool.Call>) -> Unit)

Invokes the provided action when multiple tool call messages are found within a given list of response messages. Filters the list of responses to include only instances of Message.Tool.Call and executes the action on the filtered list if it is not empty.

Builds and returns an instance of the AIAgentFunctionalPipeline.

open override fun remove(key: AIAgentStorageKey<*>): Boolean

Removes a feature or data associated with the specified key from the agent's storage.

Removes the agent-specific context data associated with the current context.

suspend fun requestLLM(message: String, allowToolCalls: Boolean = true): Message.Response

Sends a message to a Large Language Model (LLM) and optionally allows the use of tools during the LLM interaction. The message becomes part of the current prompt, and the LLM's response is processed accordingly, either with or without tool integrations based on the provided parameters.

fun requestLLM(message: String, allowToolCalls: Boolean = true, executorService: ExecutorService? = null): Message.Response

Sends a request to the Large Language Model (LLM) and retrieves its response.
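The request methods above are typically combined into a request/execute/send loop: ask the LLM, execute any tool calls it returns, feed results back, and repeat until a text answer arrives. The sketch below simulates that shape with plain functions and stand-in types (`Reply`, `Text`, `Call`, `runToolLoop` are hypothetical); in real code the callbacks would be requestLLMMultiple, executeMultipleTools, and sendMultipleToolResults on this context.

```kotlin
// Stand-in reply types for the sketch.
sealed interface Reply
data class Text(val content: String) : Reply
data class Call(val tool: String) : Reply

fun runToolLoop(
    request: (String) -> List<Reply>,           // stands in for requestLLMMultiple
    execute: (List<Call>) -> List<String>,      // stands in for executeMultipleTools
    sendResults: (List<String>) -> List<Reply>, // stands in for sendMultipleToolResults
    input: String,
): String {
    var replies = request(input)
    while (true) {
        val calls = replies.filterIsInstance<Call>()
        if (calls.isEmpty()) break               // no more tool calls: done
        replies = sendResults(execute(calls))    // run tools, feed results back
    }
    return replies.filterIsInstance<Text>().joinToString(" ") { it.content }
}

fun main() {
    val answer = runToolLoop(
        request = { listOf(Call("search")) },
        execute = { calls -> calls.map { "result:${it.tool}" } },
        sendResults = { results -> listOf(Text("Found ${results.size} result(s).")) },
        input = "find docs",
    )
    println(answer) // Found 1 result(s).
}
```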

suspend fun requestLLMForceOneTool(message: String, tool: Tool<*, *>): Message.Response
suspend fun requestLLMForceOneTool(message: String, tool: ToolDescriptor): Message.Response

Sends a message to a Large Language Model (LLM) and forces it to use a specific tool. The message becomes part of the current prompt, and the LLM is instructed to use only the specified tool.

fun requestLLMForceOneTool(message: String, tool: Tool<*, *>, executorService: ExecutorService? = null): Message.Response

Sends a request to the LLM forcing the use of a specified tool and returns the response.

fun requestLLMForceOneTool(message: String, tool: ToolDescriptor, executorService: ExecutorService? = null): Message.Response

Sends a request to the LLM using a specified tool descriptor, ensuring the use of exactly one tool in the response generation process.

suspend fun requestLLMMultiple(message: String): List<Message.Response>

Sends a message to a Large Language Model (LLM) and gets multiple LLM responses with tool calls enabled. The message becomes part of the current prompt, and multiple responses from the LLM are collected.

fun requestLLMMultiple(message: String, executorService: ExecutorService? = null): List<Message.Response>

Sends a request to the Large Language Model (LLM) and retrieves multiple responses.

suspend fun requestLLMOnlyCallingTools(message: String): Message.Response

Sends a message to a Large Language Model (LLM) that will only call tools without generating text responses. The message becomes part of the current prompt, and the LLM is instructed to only use tools.

fun requestLLMOnlyCallingTools(message: String, executorService: ExecutorService? = null): Message.Response

Executes a request to the LLM, restricting the process to only calling external tools as needed.

suspend fun requestLLMStreaming(message: String, structureDefinition: StructureDefinition? = null): Flow<StreamFrame>

Sends a message to a Large Language Model (LLM) and streams the LLM response. The message becomes part of the current prompt, and the LLM's response is streamed as it's generated.

fun requestLLMStreaming(message: String, structureDefinition: StructureDefinition?, executorService: ExecutorService? = null): Flow.Publisher<StreamFrame>

Sends a request to the Large Language Model (LLM) and returns the response as a stream.

inline suspend fun <T> requestLLMStructured(message: String, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>
suspend fun <T : Any> requestLLMStructured(message: String, clazz: KClass<T>, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>

Sends a structured request to the Large Language Model (LLM) and processes the response.
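The fixingParser parameter suggests a retry-with-repair loop: if the model's raw output fails to parse into the requested structure, a fixer rewrites the text and parsing is retried. The sketch below shows that general mechanism only; `parseWithFixing` and its callbacks are hypothetical stand-ins, not the Koog StructureFixingParser.

```kotlin
// Hypothetical retry-with-fixer loop: try to parse the raw output; on
// failure, let a "fixer" (e.g. another LLM call) repair the text and retry.
fun <T> parseWithFixing(
    raw: String,
    parse: (String) -> T,    // throws on malformed output
    fix: (String) -> String, // repairs the text between attempts
    maxAttempts: Int = 3,
): Result<T> {
    var current = raw
    var lastError: Throwable? = null
    repeat(maxAttempts) {
        try {
            return Result.success(parse(current))
        } catch (e: Exception) {
            lastError = e
            current = fix(current)
        }
    }
    return Result.failure(lastError ?: IllegalStateException("maxAttempts must be positive"))
}

fun main() {
    // Parsing succeeds only once the trailing brace is restored by the fixer.
    val result = parseWithFixing(
        raw = """{"answer": 42""",
        parse = { s -> require(s.endsWith("}")) { "unbalanced" }; s },
        fix = { s -> "$s}" },
    )
    println(result.getOrNull()) // {"answer": 42}
}
```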

Provides the root context of the current agent. If the root context is not defined, this function defaults to returning the current instance.

fun runId(): String

Retrieves the current run identifier.

suspend fun sendMultipleToolResults(results: List<ReceivedToolResult>): List<Message.Response>

Adds multiple tool results to the prompt and gets multiple LLM responses.

fun sendMultipleToolResults(results: List<ReceivedToolResult>, executorService: ExecutorService? = null): List<Message.Response>

Sends multiple tool results for processing and returns the corresponding responses.

suspend fun sendToolResult(toolResult: ReceivedToolResult): Message.Response

Adds a tool result to the prompt and requests an LLM response.

fun sendToolResult(toolResult: ReceivedToolResult, executorService: ExecutorService? = null): Message.Response

Sends the provided tool result for processing.

Provides an instance of AIAgentStateManager responsible for managing the state of an AI Agent. This function allows access to the state management operations for coordinating AI agent behavior.

Provides access to the AIAgentStorage instance.

open override fun store(key: AIAgentStorageKey<*>, value: Any)

Stores a feature in the agent's storage using the specified key.
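The store/get/remove trio operates on typed keys (AIAgentStorageKey), which lets retrieval return a typed value without unchecked casts at the call site. The sketch below illustrates the idea with hypothetical `StorageKey`/`TypedStorage` stand-ins; the real AIAgentStorage is additionally concurrency-safe.

```kotlin
// Hypothetical typed key: the type parameter records what the key stores.
class StorageKey<T>(val name: String)

// Hypothetical storage in the spirit of AIAgentStorage (not thread-safe here).
class TypedStorage {
    private val map = mutableMapOf<StorageKey<*>, Any>()

    fun <T : Any> store(key: StorageKey<T>, value: T) { map[key] = value }

    @Suppress("UNCHECKED_CAST") // safe: store() only pairs a key with its own T
    fun <T : Any> get(key: StorageKey<T>): T? = map[key] as T?

    fun remove(key: StorageKey<*>): Boolean = map.remove(key) != null
}

fun main() {
    val retries = StorageKey<Int>("retries")
    val storage = TypedStorage()
    storage.store(retries, 3)
    println(storage.get(retries)) // 3
    println(storage.remove(retries)) // true
}
```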

Stores the given agent context data within the current AI agent context.

Retrieves the name of the strategy.

inline suspend fun <Output : Any> subtask(taskDescription: String, tools: List<Tool<*, *>>? = null, llmModel: LLModel? = null, llmParams: LLMParams? = null, runMode: ToolCalls = ToolCalls.SEQUENTIAL, assistantResponseRepeatMax: Int? = null, responseProcessor: ResponseProcessor? = null): Output
suspend fun <Output : Any> subtask(taskDescription: String, outputClass: KClass<Output>, tools: List<Tool<*, *>>? = null, llmModel: LLModel? = null, llmParams: LLMParams? = null, runMode: ToolCalls = ToolCalls.SEQUENTIAL, assistantResponseRepeatMax: Int? = null, responseProcessor: ResponseProcessor? = null): Output

Executes a subtask within the larger context of an AI agent's functional operation. This method allows defining a specific task to be performed with the given input, tools, and optional configuration.

suspend fun <OutputTransformed> subtask(taskDescription: String, tools: List<Tool<*, *>>? = null, finishTool: Tool<*, OutputTransformed>, llmModel: LLModel? = null, llmParams: LLMParams? = null, runMode: ToolCalls = ToolCalls.SEQUENTIAL, assistantResponseRepeatMax: Int? = null, responseProcessor: ResponseProcessor? = null): OutputTransformed

Executes a subtask within the AI agent's functional context. This method enables the use of tools to achieve a specific task based on the input provided.

fun subtask(taskDescription: String): SubtaskBuilder

Returns a builder for configuring and running a subtask.

suspend fun subtaskWithVerification(taskDescription: String, tools: List<Tool<*, *>>? = null, llmModel: LLModel? = null, llmParams: LLMParams? = null, runMode: ToolCalls = ToolCalls.SEQUENTIAL, assistantResponseRepeatMax: Int? = null, responseProcessor: ResponseProcessor? = null): CriticResult<String>

Executes a subtask with validation and verification of the results. The method defines a subtask for the AI agent using the provided input and additional parameters and ensures that the output is evaluated based on its correctness and feedback.
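The verification flow can be pictured as a critic loop: run the task, let a critic accept the output or return feedback, and retry with the feedback folded into the task description. This is an illustrative sketch only; `Verdict`, `runTask`, and `critique` are hypothetical stand-ins for the real CriticResult type and the LLM-backed subtask and critic.

```kotlin
// Stand-in verdict type for the sketch (the real API returns CriticResult).
sealed interface Verdict
data class Accepted(val output: String) : Verdict
data class Rejected(val feedback: String) : Verdict

fun runWithVerification(
    runTask: (String) -> String,   // stand-in for the LLM-backed subtask
    critique: (String) -> Verdict, // stand-in for the verifying critic
    task: String,
    maxRounds: Int = 3,
): Verdict {
    var description = task
    var last: Verdict = Rejected("task was never run")
    repeat(maxRounds) {
        val output = runTask(description)
        last = critique(output)
        when (val verdict = last) {
            is Accepted -> return verdict
            // Fold the critic's feedback into the next attempt.
            is Rejected -> description = "$task\nFeedback: ${verdict.feedback}"
        }
    }
    return last
}

fun main() {
    val verdict = runWithVerification(
        runTask = { desc -> if ("Feedback" in desc) "detailed answer" else "short answer" },
        critique = { out -> if (out == "detailed answer") Accepted(out) else Rejected("add detail") },
        task = "summarize the report",
    )
    println(verdict) // Accepted(output=detailed answer)
}
```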

inline fun <T> AIAgentContext.with(executionInfo: AgentExecutionInfo, block: (executionInfo: AgentExecutionInfo, eventId: String) -> T): T

Executes a block of code with a modified execution context.

inline fun <T> AIAgentContext.with(partName: String, block: (executionInfo: AgentExecutionInfo, eventId: String) -> T): T

Executes a block of code with a modified execution context, creating a parent-child relationship between execution contexts for tracing purposes.