Package-level declarations
Types
Represents an abstract strategy for compressing the history of messages in an AIAgentLLMWriteSession. Different implementations define specific approaches to reducing the context size while maintaining key information.
Represents a message that has undergone moderation and the result of the moderation.
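A compression strategy like the one described above can be sketched as a class that maps a full message history to a shorter one. The following is a minimal illustration of the concept only; the class and member names below are assumptions for this sketch and are not the actual Koog signatures.

```kotlin
// Illustrative sketch of a history-compression strategy: keep only the
// last `retainedCount` messages. The generic Message parameter stands in
// for the library's message type; names here are NOT verified Koog API.
class KeepLastMessages<Message>(private val retainedCount: Int) {
    // Reduce the context size while keeping the most recent messages,
    // which usually carry the key information for the ongoing task.
    fun compress(history: List<Message>): List<Message> =
        history.takeLast(retainedCount)
}
```

A real implementation would typically extend the library's abstract strategy type and could instead summarize dropped messages into a TLDR rather than discarding them outright.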
Functions
Casts the current instance of a Message.Response to a Message.Assistant. This function should only be used when it is guaranteed that the instance is of type Message.Assistant, as it will throw an exception if the type does not match.
Attempts to cast a Message.Response instance to a Message.Assistant type.
Clears the history of messages in the current AI Agent LLM Write Session.
Compresses the current LLM prompt (message history) into a summary, replacing messages with a TLDR.
Checks if the list of Message.Response contains any instances of Message.Tool.Call.
Removes the last n messages from the current prompt in the write session.
Drops all trailing tool call messages from the current prompt.
Executes multiple tool calls and returns their results. These calls can optionally be executed in parallel.
Calls a specific tool directly using the provided arguments.
Executes a tool call and returns the result.
Extracts a list of tool call messages from a given list of response messages.
Retrieves the latest token usage from the prompt within the LLM session.
Keeps only the last N messages in the session's prompt by removing all earlier messages.
Removes all messages from the current session's prompt that have a timestamp earlier than the specified timestamp.
A node that adds messages to the LLM prompt using the provided prompt builder. The input is passed through unchanged to the output.
A pass-through node that does nothing and returns the input as output.
A node that executes multiple tool calls. These calls can optionally be executed in parallel.
Creates a node in the AI agent subgraph that processes a collection of tool calls, executes them, and sends back the results to the downstream process. The tools can be executed either in parallel or sequentially based on the provided configuration.
A node that calls a specific tool directly using the provided arguments.
A node that executes a tool call and returns its result.
A node that compresses the current LLM prompt (message history) into a summary, replacing messages with a TLDR.
A node that moderates only a single input message using a specified language model.
A node that appends a user message to the LLM prompt and gets a response with optional tool usage.
A node that appends a user message to the LLM prompt and gets multiple LLM responses with tool calls enabled.
A node that appends a user message to the LLM prompt and streams LLM response without transformation.
A node that appends a user message to the LLM prompt, streams LLM response and transforms the stream data.
A node that performs LLM streaming, collects all stream frames, converts them to response messages, and updates the prompt with the results.
A node that appends a user message to the LLM prompt and requests structured data from the LLM with optional error correction capabilities.
A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.
A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.
A node that appends a user message to the LLM prompt and gets a response where the LLM can only call tools.
A node that adds multiple tool results to the prompt and gets multiple LLM responses.
A node that adds a tool result to the prompt and requests an LLM response.
Creates a node that sets up a structured output for an AI agent subgraph.
A node that adds messages to the LLM prompt using the provided prompt builder. The input is passed through unchanged to the output.
Creates an edge that filters assistant messages based on a custom condition and extracts their content.
Executes the provided action if the given response is of type Message.Assistant.
Creates an edge that filters assistant messages based on a custom condition and provides access to media content.
Defines a handler for failure cases in a directed edge strategy by applying a condition that filters intermediate results of type SafeTool.Result.Failure. Matching failures are then propagated or transformed according to the provided condition.
Creates an edge that filters outputs based on their type.
Creates an edge that filters assistant messages based on a custom condition and extracts their content.
Filters the provided list of response messages to include only assistant messages and, if the filtered list is not empty, performs the specified action with the filtered list.
Creates an edge that filters lists of tool call messages based on a custom condition.
Invokes the provided action when multiple tool call messages are found within a given list of response messages. Filters the list of responses to include only instances of Message.Tool.Call and executes the action on the filtered list if it is not empty.
Creates an edge that filters lists of tool result messages based on a custom condition.
Filters and transforms the intermediate outputs of the AI agent node based on the success results of a tool operation.
Creates an edge that filters tool call messages for a specific tool.
Creates an edge that filters tool call messages based on a custom condition.
Creates an edge that filters tool call messages for a specific tool and arguments condition.
Creates an edge that filters tool call messages that are NOT for a specific tool.
Creates an edge that filters tool result messages for a specific tool and result condition.
Rewrites the LLM message history, leaving only the user message and the resulting TLDR.
Sends a message to a Large Language Model (LLM) and optionally allows the use of tools during the LLM interaction. The message becomes part of the current prompt, and the LLM's response is processed accordingly, either with or without tool integrations based on the provided parameters.
Sends a message to a Large Language Model (LLM) and forces it to use a specific tool. The message becomes part of the current prompt, and the LLM is instructed to use only the specified tool.
Sends a message to a Large Language Model (LLM) and gets multiple LLM responses with tool calls enabled. The message becomes part of the current prompt, and multiple responses from the LLM are collected.
Sends a message to a Large Language Model (LLM) that will only call tools without generating text responses. The message becomes part of the current prompt, and the LLM is instructed to only use tools.
Sends a message to a Large Language Model (LLM) and streams the LLM response. The message becomes part of the current prompt, and the LLM's response is streamed as it's generated.
Sends a message to a Large Language Model (LLM) and requests structured data from the LLM with error correction capabilities. The message becomes part of the current prompt, and the LLM's response is processed to extract structured data.
Adds multiple tool results to the prompt and gets multiple LLM responses.
Adds a tool result to the prompt and requests an LLM response.
Sets the ai.koog.prompt.params.LLMParams.ToolChoice for this LLM session.
Sets the ai.koog.prompt.params.LLMParams.ToolChoice to ai.koog.prompt.params.LLMParams.ToolChoice.Auto to make the LLM automatically decide between calling tools and generating text.
Sets the ai.koog.prompt.params.LLMParams.ToolChoice to ai.koog.prompt.params.LLMParams.ToolChoice.Named to make the LLM call one specific tool, toolName.
Sets the ai.koog.prompt.params.LLMParams.ToolChoice to ai.koog.prompt.params.LLMParams.ToolChoice.None to make the LLM never call tools.
Sets the ai.koog.prompt.params.LLMParams.ToolChoice to ai.koog.prompt.params.LLMParams.ToolChoice.Required to make the LLM always call tools.
Unsets the ai.koog.prompt.params.LLMParams.ToolChoice. Typically, if left unspecified, this parameter defaults to ai.koog.prompt.params.LLMParams.ToolChoice.Auto.
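As a conceptual model of the tool-choice variants described above, the sealed class below mirrors their semantics. It is an illustration only, not the actual ai.koog.prompt.params.LLMParams.ToolChoice definition; in particular, the variant for calling one specific tool is named Named here as an assumption.

```kotlin
// Illustrative model of the described tool-choice semantics.
// This is NOT the real Koog type, just a sketch of the four behaviors.
sealed class ToolChoice {
    object Auto : ToolChoice()      // LLM decides between tools and text
    object None : ToolChoice()      // LLM never calls tools
    object Required : ToolChoice()  // LLM must always call tools
    data class Named(val toolName: String) : ToolChoice() // call one specific tool
}
```

Modeling the choice as a sealed hierarchy lets session code switch exhaustively over the variants when building the request parameters.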