Package-level declarations

Types

Represents an abstract strategy for compressing the history of messages in an AIAgentLLMWriteSession. Different implementations define specific approaches to reducing the context size while maintaining key information.

data class ModeratedMessage(val message: Message, val moderationResult: ModerationResult)

Represents a message that has undergone moderation and the result of the moderation.
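
A minimal consumption sketch (imports omitted); the helper name is hypothetical and simply destructures the data class:

// Hypothetical helper: turn a moderation outcome into a short log line.
fun describe(moderated: ModeratedMessage): String {
    val (message, moderationResult) = moderated
    return "moderated message=$message result=$moderationResult"
}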

Functions

Casts the current instance of a Message.Response to a Message.Assistant. This function should only be used when it is guaranteed that the instance is of type Message.Assistant, as it will throw an exception if the type does not match.

Attempts to cast a Message.Response instance to a Message.Assistant type.

Clears the history of messages in the current AI Agent LLM Write Session.

suspend fun AIAgentFunctionalContext.compressHistory(strategy: HistoryCompressionStrategy = HistoryCompressionStrategy.WholeHistory, preserveMemory: Boolean = true)

Compresses the current LLM prompt (message history) into a summary, replacing messages with a TLDR.
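
A usage sketch, assuming the surrounding agent code provides an AIAgentFunctionalContext receiver; the extension name is hypothetical and imports are omitted:

// Hypothetical extension: replace the accumulated history with a TLDR summary.
suspend fun AIAgentFunctionalContext.trimContext() {
    compressHistory(
        strategy = HistoryCompressionStrategy.WholeHistory,
        preserveMemory = true,
    )
}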

Checks if the list of Message.Response contains any instances of Message.Tool.Call.

fun AIAgentLLMWriteSession.dropLastNMessages(n: Int, preserveSystemMessages: Boolean = true)

Removes the last n messages from the current prompt in the write session.
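
A short sketch, assuming an AIAgentLLMWriteSession receiver obtained from the surrounding agent code; the extension name and the value of n are illustrative:

// Hypothetical extension: discard the last few messages, e.g. after a failed tool loop.
fun AIAgentLLMWriteSession.undoRecentTurns() {
    dropLastNMessages(n = 4, preserveSystemMessages = true)
}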

Drops all trailing tool call messages from the current prompt.

Executes multiple tool calls and returns their results. These calls can optionally be executed in parallel.

Calls a specific tool directly using the provided arguments.

Executes a tool call and returns the result.

Extracts a list of tool call messages from a given list of response messages.

Retrieves the latest token usage from the prompt within the LLM session.

fun AIAgentLLMWriteSession.leaveLastNMessages(n: Int, preserveSystemMessages: Boolean = true)

Keeps only the last N messages in the session's prompt by removing all earlier messages.
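
A short sketch with the same write-session assumptions as above; the window size is illustrative:

// Hypothetical extension: keep only a sliding window of the most recent messages.
fun AIAgentLLMWriteSession.keepRecentWindow(windowSize: Int = 20) {
    leaveLastNMessages(n = windowSize, preserveSystemMessages = true)
}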

fun AIAgentLLMWriteSession.leaveMessagesFromTimestamp(timestamp: Instant, preserveSystemMessages: Boolean = true)

Removes all messages from the current session's prompt that have a timestamp earlier than the specified timestamp.
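
A short sketch; the cutoff is supplied by the caller, the Instant type is the one used in the signature above, and the extension name is hypothetical:

// Hypothetical extension: keep only messages at or after the given cutoff.
fun AIAgentLLMWriteSession.truncateBefore(cutoff: Instant) {
    leaveMessagesFromTimestamp(timestamp = cutoff, preserveSystemMessages = true)
}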

inline fun <T> AIAgentSubgraphBuilderBase<*, *>.nodeAppendPrompt(name: String? = null, noinline body: PromptBuilder.() -> Unit): AIAgentNodeDelegate<T, T>

A node that adds messages to the LLM prompt using the provided prompt builder. The input is passed as it is to the output.
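
A declaration sketch, written as an extension on the builder type so it does not depend on a particular strategy entry point; the node name, the system text, and the assumption that PromptBuilder accepts a plain system(String) call are illustrative, and graph wiring is omitted:

// Hypothetical builder extension: declare a pass-through node that appends a system instruction.
fun AIAgentSubgraphBuilderBase<*, *>.declareGuardrailsNode() {
    val appendGuidance by nodeAppendPrompt<String>("append-guidance") {
        system("Answer concisely and cite any tools you used.")
    }
    // Wire `appendGuidance` into the graph with edges (omitted here).
}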

A pass-through node that does nothing and returns the input as output.

A node that executes multiple tool calls. These calls can optionally be executed in parallel.

Creates a node in the AI agent subgraph that processes a collection of tool calls, executes them, and sends the results back to the downstream process. The tools can be executed either in parallel or sequentially based on the provided configuration.

A node that calls a specific tool directly using the provided arguments.

A node that executes a tool call and returns its result.

inline fun <T> AIAgentSubgraphBuilderBase<*, *>.nodeLLMCompressHistory(name: String? = null, strategy: HistoryCompressionStrategy = HistoryCompressionStrategy.WholeHistory, retrievalModel: LLModel? = null, preserveMemory: Boolean = true): AIAgentNodeDelegate<T, T>

A node that compresses the current LLM prompt (message history) into a summary, replacing messages with a TLDR.
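
A declaration sketch under the same assumptions as the nodeAppendPrompt example above; the node name is illustrative and graph wiring is omitted:

// Hypothetical builder extension: declare a pass-through node that compresses the prompt into a TLDR.
fun AIAgentSubgraphBuilderBase<*, *>.declareCompressionNode() {
    val compress by nodeLLMCompressHistory<String>(
        name = "compress-history",
        strategy = HistoryCompressionStrategy.WholeHistory,
        preserveMemory = true,
    )
    // Place `compress` on an edge once the context grows too large (omitted here).
}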

fun AIAgentSubgraphBuilderBase<*, *>.nodeLLMModerateMessage(name: String? = null, moderatingModel: LLModel? = null, includeCurrentPrompt: Boolean = false): AIAgentNodeDelegate<Message, ModeratedMessage>

A node that moderates only a single input message using a specified language model.
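
A declaration sketch; the node name is illustrative, the moderating model is left as the default, and routing on the ModeratedMessage output is omitted:

// Hypothetical builder extension: declare a Message -> ModeratedMessage node.
fun AIAgentSubgraphBuilderBase<*, *>.declareModerationNode() {
    val moderate by nodeLLMModerateMessage(
        name = "moderate-input",
        includeCurrentPrompt = false,
    )
    // Branch on `moderate`'s ModeratedMessage output with edges (omitted here).
}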

A node that appends a user message to the LLM prompt and gets a response with optional tool usage.

A node that appends a user message to the LLM prompt and gets multiple LLM responses with tool calls enabled.

A node that appends a user message to the LLM prompt and streams the LLM response without transformation.

fun <T> AIAgentSubgraphBuilderBase<*, *>.nodeLLMRequestStreaming(name: String? = null, structureDefinition: StructuredDataDefinition? = null, transformStreamData: suspend (Flow<StreamFrame>) -> Flow<T>): AIAgentNodeDelegate<String, Flow<T>>

A node that appends a user message to the LLM prompt, streams the LLM response, and transforms the stream data.
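
A declaration sketch for the transforming overload; the transform simply maps each frame to its string form and is purely illustrative (kotlinx.coroutines.flow.map is assumed to be imported, other imports omitted):

// Hypothetical builder extension: declare a String -> Flow<String> streaming node.
fun AIAgentSubgraphBuilderBase<*, *>.declareStreamingNode() {
    val stream by nodeLLMRequestStreaming(name = "stream-response") { frames: Flow<StreamFrame> ->
        frames.map { frame -> frame.toString() }
    }
    // Consume `stream`'s Flow<String> output downstream (omitted here).
}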

A node that performs LLM streaming, collects all stream frames, converts them to response messages, and updates the prompt with the results.

inline fun <T> AIAgentSubgraphBuilderBase<*, *>.nodeLLMRequestStructured(name: String? = null, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): AIAgentNodeDelegate<String, Result<StructuredResponse<T>>>

A node that appends a user message to the LLM prompt and requests structured data from the LLM with optional error correction capabilities.
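
A declaration sketch; the WeatherReport payload is hypothetical, and the @Serializable annotation assumes kotlinx.serialization is the mechanism used for structured output:

// Hypothetical payload type for the structured response.
@Serializable
data class WeatherReport(val city: String, val celsius: Int)

// Hypothetical builder extension: declare a String -> Result<StructuredResponse<WeatherReport>> node.
fun AIAgentSubgraphBuilderBase<*, *>.declareWeatherNode() {
    val structured by nodeLLMRequestStructured<WeatherReport>(
        name = "weather-structured",
        examples = listOf(WeatherReport(city = "Oslo", celsius = 3)),
    )
    // Inspect the Result and its StructuredResponse payload downstream (omitted here).
}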

A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.

A node that appends a user message to the LLM prompt and gets a response where the LLM can only call tools.

A node that adds multiple tool results to the prompt and gets multiple LLM responses.

A node that adds a tool result to the prompt and requests an LLM response.

Creates a node that sets up a structured output for an AI agent subgraph.

inline fun <T> AIAgentSubgraphBuilderBase<*, *>.nodeUpdatePrompt(name: String? = null, noinline body: PromptBuilder.() -> Unit): AIAgentNodeDelegate<T, T>

A node that adds messages to the LLM prompt using the provided prompt builder. The input is passed as it is to the output.
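
A declaration sketch analogous to the nodeAppendPrompt example above; the user(...) call assumes the plain-string PromptBuilder overload, and graph wiring is omitted:

// Hypothetical builder extension: declare a pass-through node that injects an extra user message.
fun AIAgentSubgraphBuilderBase<*, *>.declareReminderNode() {
    val remind by nodeUpdatePrompt<String>("inject-reminder") {
        user("Remember to answer in English.")
    }
    // Wire `remind` into the graph with edges (omitted here).
}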

Creates an edge that filters assistant messages based on a custom condition and extracts their content.

Executes the provided action if the given response is of type Message.Assistant.

Creates an edge that filters assistant messages based on a custom condition and provides access to media content.

Defines a handler to process failure cases in a directed edge strategy by applying a condition to filter intermediate results of type SafeTool.Result.Failure. This method is used to specialize processing for failure results and to propagate or transform them based on the provided condition.

Creates an edge that filters assistant messages based on a custom condition and extracts their content.

Filters the provided list of response messages to include only assistant messages and, if the filtered list is not empty, performs the specified action with the filtered list.

Creates an edge that filters lists of tool call messages based on a custom condition.

Invokes the provided action when multiple tool call messages are found within a given list of response messages. Filters the list of responses to include only instances of Message.Tool.Call and executes the action on the filtered list if it is not empty.

Filters and transforms the intermediate outputs of the AI agent node based on the success results of a tool operation.

Creates an edge that filters tool result messages for a specific tool and result condition.

suspend fun AIAgentLLMWriteSession.replaceHistoryWithTLDR(strategy: HistoryCompressionStrategy = HistoryCompressionStrategy.WholeHistory, preserveMemory: Boolean = true)

Rewrites the LLM message history, leaving only the user message and the resulting TLDR.
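
A short sketch, assuming an AIAgentLLMWriteSession receiver; the extension name is hypothetical:

// Hypothetical extension: collapse the history into a TLDR while keeping agent memory.
suspend fun AIAgentLLMWriteSession.summarizeHistory() {
    replaceHistoryWithTLDR(
        strategy = HistoryCompressionStrategy.WholeHistory,
        preserveMemory = true,
    )
}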

suspend fun AIAgentFunctionalContext.requestLLM(message: String, allowToolCalls: Boolean = true): Message.Response

Sends a message to a Large Language Model (LLM) and optionally allows the use of tools during the LLM interaction. The message becomes part of the current prompt, and the LLM's response is processed accordingly, either with or without tool integrations based on the provided parameters.
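
A short sketch, assuming an AIAgentFunctionalContext receiver; the extension name is hypothetical:

// Hypothetical extension: one-shot question with tool calls disabled.
suspend fun AIAgentFunctionalContext.askPlainText(question: String): Message.Response =
    requestLLM(message = question, allowToolCalls = false)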

Sends a message to a Large Language Model (LLM) and forces it to use a specific tool. The message becomes part of the current prompt, and the LLM is instructed to use only the specified tool.

Sends a message to a Large Language Model (LLM) and gets multiple LLM responses with tool calls enabled. The message becomes part of the current prompt, and multiple responses from the LLM are collected.

Sends a message to a Large Language Model (LLM) that will only call tools without generating text responses. The message becomes part of the current prompt, and the LLM is instructed to only use tools.

suspend fun AIAgentFunctionalContext.requestLLMStreaming(message: String, structureDefinition: StructuredDataDefinition? = null): Flow<StreamFrame>

Sends a message to a Large Language Model (LLM) and streams the LLM response. The message becomes part of the current prompt, and the LLM's response is streamed as it's generated.
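
A short sketch under the same functional-context assumptions as above; counting frames with kotlinx.coroutines.flow.count is purely illustrative:

// Hypothetical extension: stream a response and return how many frames arrived.
suspend fun AIAgentFunctionalContext.streamAndCount(question: String): Int =
    requestLLMStreaming(message = question).count()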

inline suspend fun <T> AIAgentFunctionalContext.requestLLMStructured(message: String, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>

Sends a message to a Large Language Model (LLM) and requests structured data from the LLM with error correction capabilities. The message becomes part of the current prompt, and the LLM's response is processed to extract structured data.
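
A short sketch; the Sentiment payload is hypothetical and the @Serializable annotation assumes kotlinx.serialization:

// Hypothetical payload type for the structured response.
@Serializable
data class Sentiment(val label: String, val confidence: Double)

// Hypothetical extension: request a structured classification and return the raw Result.
suspend fun AIAgentFunctionalContext.classifySentiment(text: String): Result<StructuredResponse<Sentiment>> =
    requestLLMStructured<Sentiment>(message = "Classify the sentiment of: $text")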

Adds multiple tool results to the prompt and gets multiple LLM responses.

Adds a tool result to the prompt and requests an LLM response.

Sets the ai.koog.prompt.params.LLMParams.ToolChoice to ai.koog.prompt.params.LLMParams.ToolChoice.Auto to make the LLM automatically decide between calling tools and generating text.

Unsets the ai.koog.prompt.params.LLMParams.ToolChoice. If left unspecified, this parameter typically defaults to ai.koog.prompt.params.LLMParams.ToolChoice.Auto.