Package-level declarations

Types


Represents an abstract strategy for compressing the history of messages in an AIAgentLLMWriteSession. Different implementations define specific approaches to reducing the context size while maintaining key information.

Functions


A pass-through node that does nothing and returns its input as output.


A node that executes multiple tool calls. These calls can optionally be executed in parallel.


A node that calls a specific tool directly using the provided arguments.


A node that executes a tool call and returns its result.

fun <T> AIAgentSubgraphBuilderBase<*, *>.nodeLLMCompressHistory(name: String? = null, strategy: HistoryCompressionStrategy = HistoryCompressionStrategy.WholeHistory, preserveMemory: Boolean = true): AIAgentNodeDelegateBase<T, T>

A node that compresses the current LLM prompt (message history) into a summary, replacing messages with a TLDR.
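As a minimal sketch, this node can be declared inside a strategy or subgraph builder; the surrounding builder, property name, and type argument below are illustrative assumptions, while the parameters come from the signature above.

// Declared by property delegation inside an AIAgentSubgraphBuilderBase-based builder.
val compressHistory by nodeLLMCompressHistory<String>(
    name = "compressHistory",
    strategy = HistoryCompressionStrategy.WholeHistory,
    preserveMemory = true
)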


A node that appends a user message to the LLM prompt and gets a response with optional tool usage.


A node that appends a user message to the LLM prompt and gets multiple LLM responses with tool calls enabled.


A node that appends a user message to the LLM prompt and streams the LLM response without transformation.

fun <T> AIAgentSubgraphBuilderBase<*, *>.nodeLLMRequestStreaming(name: String? = null, structureDefinition: StructuredDataDefinition? = null, transformStreamData: suspend (Flow<String>) -> Flow<T>): AIAgentNodeDelegateBase<String, Flow<T>>

A node that appends a user message to the LLM prompt, streams the LLM response, and transforms the stream data.
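As a hedged sketch, the node can be declared inside a strategy or subgraph builder with a transform applied to the streamed chunks; the builder context, property name, and uppercasing transform are illustrative, and only the function name and parameters come from the signature above.

// The lambda receives the response as Flow<String> and maps each chunk;
// requires import kotlinx.coroutines.flow.map.
val streamUppercased by nodeLLMRequestStreaming<String>(
    name = "streamResponse"
) { chunks -> chunks.map { it.uppercase() } }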


A node that appends a user message to the LLM prompt and requests structured data from the LLM with error correction capabilities.


A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.


A node that appends a user message to the LLM prompt and gets a response where the LLM can only call tools.


A node that adds multiple tool results to the prompt and gets multiple LLM responses.


A node that adds a tool result to the prompt and requests an LLM response.


A node that adds messages to the LLM prompt using the provided prompt builder.


Creates an edge that filters assistant messages based on a custom condition and extracts their content.
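For illustration, such an edge is typically wired in the strategy DSL; the builder functions shown here (edge, forwardTo, onAssistantMessage) and the node names are assumptions about typical usage, not taken from this page.

// Forwards to nodeFinish only when an assistant message arrives; the condition receives
// the assistant message, and the edge passes its extracted content to the target node.
edge(callLLM forwardTo nodeFinish onAssistantMessage { true })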


Creates an edge that filters tool result messages for a specific tool and result condition.

suspend fun AIAgentLLMWriteSession.replaceHistoryWithTLDR(strategy: HistoryCompressionStrategy = HistoryCompressionStrategy.WholeHistory, preserveMemory: Boolean = true)

Rewrites the LLM message history, leaving only the user message and the resulting TLDR.
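A minimal sketch of calling this from within a write session; the llm.writeSession receiver is an assumption about typical usage, while the function and its parameters come from the signature above.

// Compresses the whole history into a TLDR while preserving memory-related messages.
llm.writeSession {
    replaceHistoryWithTLDR(
        strategy = HistoryCompressionStrategy.WholeHistory,
        preserveMemory = true
    )
}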


Set the ai.koog.prompt.params.LLMParams.ToolChoice to ai.koog.prompt.params.LLMParams.ToolChoice.Auto to make the LLM automatically decide between calling tools and generating text.


Unset the ai.koog.prompt.params.LLMParams.ToolChoice. If left unspecified, this parameter typically defaults to ai.koog.prompt.params.LLMParams.ToolChoice.Auto.