Package-level declarations

Types

Represents an abstract strategy for compressing the history of messages in an AIAgentLLMWriteSession. Different implementations define specific approaches to reducing the context size while maintaining key information.

data class ModeratedMessage(val message: Message, val moderationResult: ModerationResult)

Represents a message that has undergone moderation and the result of the moderation.

Functions

Clears the message history in the current AIAgentLLMWriteSession.

fun AIAgentLLMWriteSession.dropLastNMessages(n: Int, preserveSystemMessages: Boolean = true)

Removes the last n messages from the current prompt in the write session.

Drops all trailing tool call messages from the current prompt.

fun AIAgentLLMWriteSession.leaveLastNMessages(n: Int, preserveSystemMessages: Boolean = true)

Keeps only the last N messages in the session's prompt by removing all earlier messages.

fun AIAgentLLMWriteSession.leaveMessagesFromTimestamp(timestamp: Instant, preserveSystemMessages: Boolean = true)

Removes all messages from the current session's prompt that have a timestamp earlier than the specified timestamp.
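
A minimal sketch of combining these trimming extensions. Only the three extension signatures above come from this page; the node builder, the llm.writeSession call, and the kotlinx.datetime Instant are assumptions about the surrounding API.

import kotlin.time.Duration.Companion.hours
import kotlinx.datetime.Clock

// Hypothetical custom node; only dropLastNMessages, leaveLastNMessages and
// leaveMessagesFromTimestamp are taken from the signatures on this page.
val trimHistory by node<String, String>("trimHistory") { input ->
    llm.writeSession {                                   // assumed way to obtain an AIAgentLLMWriteSession
        dropLastNMessages(2)                             // drop the two most recent messages
        leaveLastNMessages(20)                           // then keep at most the 20 newest ones
        leaveMessagesFromTimestamp(                      // finally drop anything older than one hour
            timestamp = Clock.System.now() - 1.hours,    // Instant assumed to be kotlinx.datetime.Instant
            preserveSystemMessages = true
        )
    }
    input
}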

A pass-through node that does nothing and returns its input as output.

A node that executes multiple tool calls. These calls can optionally be executed in parallel.

Creates a node in the AI agent subgraph that processes a collection of tool calls, executes them, and sends back the results to the downstream process. The tools can be executed either in parallel or sequentially based on the provided configuration.

A node that calls a specific tool directly using the provided arguments.

A node that executes a tool call and returns its result.

inline fun <T> AIAgentSubgraphBuilderBase<*, *>.nodeLLMCompressHistory(name: String? = null, strategy: HistoryCompressionStrategy = HistoryCompressionStrategy.WholeHistory, retrievalModel: LLModel? = null, preserveMemory: Boolean = true): AIAgentNodeDelegate<T, T>

A node that compresses the current LLM prompt (message history) into a summary, replacing messages with a TLDR.
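
Declaring the node might look like the following sketch, assuming a strategy-builder scope; the parameters match the signature above.

// Assumed AIAgentSubgraphBuilderBase scope (for example, a strategy builder);
// the nodeLLMCompressHistory signature itself is shown above.
val compressHistory by nodeLLMCompressHistory<String>(
    name = "compressHistory",
    strategy = HistoryCompressionStrategy.WholeHistory,  // default strategy, spelled out for illustration
    preserveMemory = true                                 // default value, spelled out for illustration
)
// The node is generic in T and passes its input through unchanged: AIAgentNodeDelegate<T, T>.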

fun AIAgentSubgraphBuilderBase<*, *>.nodeLLMModerateMessage(name: String? = null, moderatingModel: LLModel? = null, includeCurrentPrompt: Boolean = false): AIAgentNodeDelegate<Message, ModeratedMessage>

A node that moderates only a single input message using a specified language model.
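
A possible wiring of the moderation node to a downstream consumer of ModeratedMessage, assuming a node builder and an isHarmful property on ModerationResult (neither is documented on this page).

// Assumed strategy-builder scope; nodeLLMModerateMessage and ModeratedMessage are documented above.
val moderate by nodeLLMModerateMessage(name = "moderate")

// Hypothetical downstream node consuming the ModeratedMessage output.
val routeByModeration by node<ModeratedMessage, String>("routeByModeration") { moderated ->
    // moderated.message and moderated.moderationResult come from the data class above;
    // the isHarmful and content properties are assumptions.
    if (moderated.moderationResult.isHarmful) "blocked" else moderated.message.content
}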

A node that appends a user message to the LLM prompt and gets a response with optional tool usage.

A node that appends a user message to the LLM prompt and gets multiple LLM responses with tool calls enabled.

A node that appends a user message to the LLM prompt and streams LLM response without transformation.

fun <T> AIAgentSubgraphBuilderBase<*, *>.nodeLLMRequestStreaming(name: String? = null, structureDefinition: StructuredDataDefinition? = null, transformStreamData: suspend (Flow<StreamFrame>) -> Flow<T>): AIAgentNodeDelegate<String, Flow<T>>

A node that appends a user message to the LLM prompt, streams LLM response and transforms the stream data.
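
A sketch of the transforming overload above; converting frames to strings via toString is purely illustrative.

import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map

// Assumed strategy-builder scope; the nodeLLMRequestStreaming signature is shown above.
val streamAsText by nodeLLMRequestStreaming(
    name = "streamAsText"
) { frames: Flow<StreamFrame> ->
    // transformStreamData: convert each frame to a plain string (toString is illustrative only).
    frames.map { frame -> frame.toString() }
}
// Result type per the signature: AIAgentNodeDelegate<String, Flow<String>>.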

A node that performs LLM streaming, collects all stream frames, converts them to response messages, and updates the prompt with the results.

inline fun <T> AIAgentSubgraphBuilderBase<*, *>.nodeLLMRequestStructured(name: String? = null, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): AIAgentNodeDelegate<String, Result<StructuredResponse<T>>>

A node that appends a user message to the LLM prompt and requests structured data from the LLM with optional error correction capabilities.
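
A sketch with a hypothetical WeatherReport target type; the @Serializable requirement is an assumption, and the parameters follow the signature above.

import kotlinx.serialization.Serializable

// Hypothetical target type; the @Serializable requirement is an assumption about how
// structured output types are declared.
@Serializable
data class WeatherReport(val city: String, val temperatureCelsius: Double)

// Assumed strategy-builder scope; the nodeLLMRequestStructured signature is shown above.
val structuredWeather by nodeLLMRequestStructured<WeatherReport>(
    name = "structuredWeather",
    examples = listOf(WeatherReport("Berlin", 18.5)),  // optional few-shot examples
    fixingParser = null                                // pass a StructureFixingParser to enable error correction
)
// Output type per the signature: Result<StructuredResponse<WeatherReport>>.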

A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.

A node that appends a user message to the LLM prompt and gets a response where the LLM can only call tools.

A node that adds multiple tool results to the prompt and gets multiple LLM responses.

A node that adds a tool result to the prompt and requests an LLM response.

Creates a node that sets up a structured output for an AI agent subgraph.

inline fun <T> AIAgentSubgraphBuilderBase<*, *>.nodeUpdatePrompt(name: String? = null, noinline body: PromptBuilder.() -> Unit): AIAgentNodeDelegate<T, T>

A node that adds messages to the LLM prompt using the provided prompt builder. The input is passed as it is to the output.
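
A sketch of a prompt-updating node; the user(...) call inside the PromptBuilder body is an assumption about the prompt DSL.

// Assumed strategy-builder scope; the nodeUpdatePrompt signature is shown above.
val addGuidance by nodeUpdatePrompt<String>("addGuidance") {
    // PromptBuilder body; user(...) is assumed to append a user message to the prompt.
    user("Answer concisely and mention which tools you used.")
}
// Pass-through behaviour: the node's String input is returned unchanged as its output.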

Creates an edge that filters assistant messages based on a custom condition and extracts their content.

Creates an edge that filters assistant messages based on a custom condition and provides access to media content.

Defines a handler for failure cases on a directed edge by applying a condition that filters intermediate results of type SafeTool.Result.Failure, so that failure results can be propagated or transformed based on that condition.

Creates an edge that filters assistant messages based on a custom condition and extracts their content.

Filters and transforms the intermediate outputs of the AI agent node based on the success results of a tool operation.

Creates an edge that filters tool result messages for a specific tool and result condition.

suspend fun AIAgentLLMWriteSession.replaceHistoryWithTLDR(strategy: HistoryCompressionStrategy = HistoryCompressionStrategy.WholeHistory, preserveMemory: Boolean = true)

Rewrites the LLM message history, leaving only the user message and the resulting TLDR.
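
A sketch of calling this from a write session; the node body, llm.writeSession, and prompt.messages access are assumptions, while the call itself follows the signature above.

// Hypothetical node that compresses the history once it grows large; only
// replaceHistoryWithTLDR and its defaults come from the signature above.
val summarizeIfNeeded by node<String, String>("summarizeIfNeeded") { input ->
    llm.writeSession {                          // assumed way to obtain an AIAgentLLMWriteSession
        if (prompt.messages.size > 50) {        // threshold and prompt.messages access are assumptions
            replaceHistoryWithTLDR(
                strategy = HistoryCompressionStrategy.WholeHistory,
                preserveMemory = true
            )
        }
    }
    input
}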

Sets the ai.koog.prompt.params.LLMParams.ToolChoice to ai.koog.prompt.params.LLMParams.ToolChoice.Auto so that the LLM automatically decides between calling tools and generating text.

Unsets the ai.koog.prompt.params.LLMParams.ToolChoice. Typically, if left unspecified, the default value of this parameter is ai.koog.prompt.params.LLMParams.ToolChoice.Auto.