Package-level declarations
Types
Represents an abstract strategy for compressing the history of messages in an AIAgentLLMWriteSession. Different implementations define specific approaches to reducing the context size while maintaining key information.
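As a hedged sketch of how such a strategy might be applied inside a custom node's write session (the strategy object `HistoryCompressionStrategy.WholeHistory` and the `replaceHistoryWithTLDR` helper are assumed from recent Koog versions; check the actual signatures in this package):

```kotlin
// Hedged sketch: compressing the whole message history into a TLDR
// inside a custom node. Names are assumptions based on Koog's
// documented history-compression API, not confirmed signatures.
val compressHistory by node<Unit, Unit>("compressHistory") {
    llm.writeSession {
        replaceHistoryWithTLDR(
            strategy = HistoryCompressionStrategy.WholeHistory,
            preserveMemory = true
        )
    }
}
```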
Represents a message that has undergone moderation and the result of the moderation.
Functions
Clears the history of messages in the current AI Agent LLM Write Session.
Removes the last n messages from the current prompt in the write session.
Drops all trailing tool call messages from the current prompt.
Keeps only the last N messages in the session's prompt by removing all earlier messages.
Removes all messages from the current session's prompt that have a timestamp earlier than the specified timestamp.
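A hedged sketch of how these pruning helpers might be combined in a write session (the helper names `dropLastNMessages` and `leaveLastNMessages` are assumptions inferred from the entries above, not confirmed signatures):

```kotlin
// Hedged sketch: trimming prompt history inside a write session.
// Function names are hypothetical, inferred from the summaries above.
llm.writeSession {
    dropLastNMessages(2)    // remove the two most recent messages
    leaveLastNMessages(10)  // keep only the ten most recent messages
}
```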
A pass-through node that does nothing and returns its input as output.
A node that executes multiple tool calls. These calls can optionally be executed in parallel.
Creates a node in the AI agent subgraph that processes a collection of tool calls, executes them, and sends back the results to the downstream process. The tools can be executed either in parallel or sequentially based on the provided configuration.
A node that calls a specific tool directly using the provided arguments.
A node that executes a tool call and returns its result.
A node that compresses the current LLM prompt (message history) into a summary, replacing messages with a TLDR.
A node that moderates only a single input message using a specified language model.
A node that appends a user message to the LLM prompt and gets a response with optional tool usage.
A node that appends a user message to the LLM prompt and gets multiple LLM responses with tool calls enabled.
A node that appends a user message to the LLM prompt and streams LLM response without transformation.
A node that appends a user message to the LLM prompt, streams LLM response and transforms the stream data.
A node that performs LLM streaming, collects all stream frames, converts them to response messages, and updates the prompt with the results.
A node that appends a user message to the LLM prompt and requests structured data from the LLM with optional error correction capabilities.
A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.
A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.
A node that appends a user message to the LLM prompt and gets a response where the LLM can only call tools.
A node that adds multiple tool results to the prompt and gets multiple LLM responses.
A node that adds a tool result to the prompt and requests an LLM response.
Creates a node that sets up a structured output for an AI agent subgraph.
A node that adds messages to the LLM prompt using the provided prompt builder. The input is passed as it is to the output.
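A hedged sketch of a prompt-update node built with the prompt builder described in the entry above (the builder name `nodeUpdatePrompt` and the `system`/`user` builder functions follow Koog's prompt DSL; verify against the actual package):

```kotlin
// Hedged sketch: a node that appends messages via the prompt builder
// and passes its input through unchanged.
val addContext by nodeUpdatePrompt<String>("addContext") {
    system("Answer concisely.")
    user("Remember to cite sources.")
}
```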
Creates an edge that filters assistant messages based on a custom condition and extracts their content.
Creates an edge that filters assistant messages based on a custom condition and provides access to media content.
Defines a handler to process failure cases in a directed edge strategy by applying a condition to filter intermediate results of type SafeTool.Result.Failure. This method specializes processing for failure results and propagates or transforms them based on the provided condition.
Creates an edge that filters outputs based on their type.
Creates an edge that filters assistant messages based on a custom condition and extracts their content.
Creates an edge that filters lists of tool call messages based on a custom condition.
Creates an edge that filters lists of tool result messages based on a custom condition.
Filters and transforms the intermediate outputs of the AI agent node based on the success results of a tool operation.
Creates an edge that filters tool call messages for a specific tool.
Creates an edge that filters tool call messages based on a custom condition.
Creates an edge that filters tool call messages for a specific tool and arguments condition.
Creates an edge that filters tool call messages to exclude a specific tool.
Creates an edge that filters tool result messages for a specific tool and result condition.
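The node and edge builders above compose into a strategy graph. A hedged sketch of a minimal single-run agent, using the builder names from Koog's strategy DSL (`strategy`, `nodeLLMRequest`, `nodeExecuteTool`, `nodeLLMSendToolResult`, and the `onAssistantMessage`/`onToolCall` edge conditions), which may differ across versions:

```kotlin
// Hedged sketch: wiring LLM-request and tool-execution nodes with
// condition-filtered edges. Assistant messages end the run; tool calls
// loop back through execution until the LLM produces a final answer.
val agentStrategy = strategy("simple-agent") {
    val requestLLM by nodeLLMRequest()
    val executeTool by nodeExecuteTool()
    val sendToolResult by nodeLLMSendToolResult()

    edge(nodeStart forwardTo requestLLM)
    edge(requestLLM forwardTo nodeFinish onAssistantMessage { true })
    edge(requestLLM forwardTo executeTool onToolCall { true })
    edge(executeTool forwardTo sendToolResult)
    edge(sendToolResult forwardTo nodeFinish onAssistantMessage { true })
    edge(sendToolResult forwardTo executeTool onToolCall { true })
}
```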
Rewrites the LLM message history, leaving only the user message and the resulting TLDR.
Sets the ai.koog.prompt.params.LLMParams.ToolChoice for this LLM session.
Sets the ai.koog.prompt.params.LLMParams.ToolChoice to ai.koog.prompt.params.LLMParams.ToolChoice.Auto, letting the LLM automatically decide between calling tools and generating text.
Sets the ai.koog.prompt.params.LLMParams.ToolChoice to ai.koog.prompt.params.LLMParams.ToolChoice.Named, forcing the LLM to call one specific tool, toolName.
Sets the ai.koog.prompt.params.LLMParams.ToolChoice to ai.koog.prompt.params.LLMParams.ToolChoice.None so that the LLM never calls tools.
Sets the ai.koog.prompt.params.LLMParams.ToolChoice to ai.koog.prompt.params.LLMParams.ToolChoice.Required so that the LLM always calls tools.
Unsets the ai.koog.prompt.params.LLMParams.ToolChoice. Typically, if left unspecified, this parameter defaults to ai.koog.prompt.params.LLMParams.ToolChoice.Auto.
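A hedged sketch of toggling tool choice inside a write session (the helper names `setToolChoiceRequired` and `unsetToolChoice` are assumptions based on the entries above; check the actual function signatures in this package):

```kotlin
// Hedged sketch: force tool calls for one request, then restore the
// default behavior. Helper names are hypothetical.
llm.writeSession {
    setToolChoiceRequired() // the LLM must call a tool
    requestLLMMultiple()    // gather responses with tool calls enabled
    unsetToolChoice()       // back to the default (usually Auto)
}
```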