Package-level declarations
Types
A concrete implementation of the HistoryCompressionStrategy that splits the session's prompt into chunks of a predefined size and generates summaries (TL;DR) for each chunk.
A strategy for compressing history by retaining only the last n messages in a session.
A strategy for compressing message histories using a specified timestamp as a reference point. This strategy removes messages that occurred before a given timestamp and creates a summarized context for further interactions.
Represents an abstract strategy for compressing the history of messages in an AIAgentLLMWriteSession. Different implementations define specific approaches to reducing the context size while maintaining key information.
Represents a message that has undergone moderation, together with the result of that moderation.
WholeCompressionStrategyWithMultipleSystemMessages is a concrete implementation of the HistoryCompressionStrategy that handles scenarios where the conversation history contains multiple system messages.
WholeHistory is a concrete implementation of the HistoryCompressionStrategy that encapsulates the logic for compressing the entire conversation history into a succinct summary (TL;DR) and composing the messages needed to produce a streamlined prompt suitable for language model interactions.
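As a schematic illustration of how these strategies might be selected, the sketch below instantiates a few of the variants described above. The member names `WholeHistory`, `FromLastNMessages`, and `Chunked`, and their parameters, are assumptions based on the descriptions in this index; consult the type declarations for the actual signatures.

```kotlin
// Hedged sketch: choosing a history compression strategy.
// Names and parameters below are assumptions, not verified signatures.

// Compress the whole conversation history into a single TL;DR summary.
val whole: HistoryCompressionStrategy = HistoryCompressionStrategy.WholeHistory

// Retain only the last n messages in the session (count is illustrative).
val recent = HistoryCompressionStrategy.FromLastNMessages(10)

// Split the session's prompt into fixed-size chunks and summarize each
// chunk separately (chunk size is illustrative).
val chunked = HistoryCompressionStrategy.Chunked(chunkSize = 20)
```

Which variant fits depends on the workload: `WholeHistory` minimizes context at the cost of detail, while `FromLastNMessages` preserves recent turns verbatim.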
Functions
InternalAgentsApi method. Appends a prompt to the current LLM session.
InternalAgentsApi method. Executes a single tool with the provided arguments and returns the result.
InternalAgentsApi method. Performs LLM history compression.
A node that adds messages to the LLM prompt using the provided prompt builder. The input is passed as it is to the output.
A pass-through node that does nothing and returns the input as output.
A node that executes multiple tool calls. These calls can optionally be executed in parallel.
Creates a node in the AI agent subgraph that processes a collection of tool calls, executes them, and sends back the results to the downstream process. The tools can be executed either in parallel or sequentially based on the provided configuration.
A node that calls a specific tool directly using the provided arguments.
A node that executes a tool call and returns its result.
A node that compresses the current LLM prompt (message history) into a summary, replacing messages with a TL;DR.
A node that moderates a single input message using a specified language model.
A node that appends a user message to the LLM prompt and gets a response with optional tool usage.
A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.
A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.
A node that appends a user message to the LLM prompt and gets multiple LLM responses with tool calls enabled.
A node that appends a user message to the LLM prompt and gets multiple LLM responses where the LLM can only call tools.
A node that appends a user message to the LLM prompt and gets a response where the LLM can only call tools.
A node that appends a user message to the LLM prompt and streams the LLM response without transformation.
A node that appends a user message to the LLM prompt, streams the LLM response, and transforms the stream data.
A node that performs LLM streaming, collects all stream frames, converts them to response messages, and updates the prompt with the results.
A node that appends a user message to the LLM prompt and requests structured data from the LLM with optional error correction capabilities.
A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.
A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.
A node that appends a user message to the LLM prompt and gets a response where the LLM can only call tools.
A node that adds multiple tool results to the prompt and gets multiple LLM responses.
A node that adds multiple tool results to the prompt and gets multiple LLM responses where the LLM can only call tools.
A node that adds a tool result to the prompt and requests an LLM response.
A node that adds a tool result to the prompt and gets an LLM response where the LLM can only call tools.
Creates a node that sets up a structured output for an AI agent subgraph.
A node that adds messages to the LLM prompt using the provided prompt builder. The input is passed as it is to the output.
Creates an edge that filters assistant messages based on a custom condition and extracts their content.
Creates an edge that filters assistant messages based on a custom condition and provides access to media content.
Defines a handler to process failure cases in a directed edge strategy by applying a condition to filter intermediate results of type SafeTool.Result.Failure. It specializes processing of failure results, propagating or transforming them according to the provided condition.
Creates an edge that filters outputs based on their type.
Creates an edge that filters assistant messages based on a custom condition and extracts their content.
Creates an edge that filters lists of reasoning messages based on a custom condition.
Creates an edge that filters lists of tool call messages based on a custom condition.
Creates an edge that filters lists of tool result messages based on a custom condition.
Creates an edge that filters a reasoning message based on a custom condition.
Filters and transforms the intermediate outputs of the AI agent node based on the success results of a tool operation.
Creates an edge that filters tool call messages for a specific tool.
Creates an edge that filters tool call messages based on a custom condition.
Creates an edge that filters tool call messages for a specific tool and arguments condition.
Creates an edge that filters tool call messages that do NOT match a specific tool.
Creates an edge that filters tool result messages for a specific tool and result condition.
InternalAgentsApi method. Performs LLM streaming and sends the results to the prompt.
InternalAgentsApi method. Performs LLM streaming and transforms the stream data.
InternalAgentsApi method. Sets up structured output for an AI agent subgraph.
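The node and edge builders listed above are typically combined into an agent strategy graph. The hedged sketch below shows the common request/tool-call loop; builder names such as `strategy`, `nodeLLMRequest`, `nodeExecuteTool`, `nodeLLMSendToolResult`, `onAssistantMessage`, and `onToolCall` are assumptions inferred from the descriptions in this index and may differ from the actual declarations.

```kotlin
// Hedged sketch of a strategy graph wiring the nodes and edges above.
// Builder names and signatures are assumptions, not verified API.
val exampleStrategy = strategy("tool-calling-loop") {
    val callLLM by nodeLLMRequest()            // appends user message, gets response
    val executeTool by nodeExecuteTool()       // executes a tool call, returns result
    val sendResult by nodeLLMSendToolResult()  // adds tool result, requests response

    edge(nodeStart forwardTo callLLM)
    // Finish when the LLM answers directly with an assistant message.
    edge(callLLM forwardTo nodeFinish onAssistantMessage { true })
    // Route tool calls to execution, then feed results back to the LLM.
    edge(callLLM forwardTo executeTool onToolCall { true })
    edge(executeTool forwardTo sendResult)
    edge(sendResult forwardTo executeTool onToolCall { true })
    edge(sendResult forwardTo nodeFinish onAssistantMessage { true })
}
```

The edge conditions correspond to the filtering edges described above: each edge forwards an output only when its predicate (on the assistant message or tool call) holds.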