Package-level declarations

Types

A concrete implementation of the HistoryCompressionStrategy that splits the session's prompt into chunks of a predefined size and generates summaries (TL;DR) for each chunk.

A strategy for compressing history by retaining only the last n messages in a session.

A strategy for compressing message histories using a specified timestamp as a reference point. This strategy removes messages that occurred before a given timestamp and creates a summarized context for further interactions.

Represents an abstract strategy for compressing the history of messages in an AIAgentLLMWriteSession. Different implementations define specific approaches to reducing the context size while maintaining key information.

data class ModeratedMessage(val message: Message, val moderationResult: ModerationResult)

Represents a message that has undergone moderation and the result of the moderation.

WholeCompressionStrategyWithMultipleSystemMessages is a concrete implementation of the HistoryCompressionStrategy that handles scenarios where the conversation history contains multiple system messages.

WholeHistory is a concrete implementation of the HistoryCompressionStrategy that encapsulates the logic for compressing entire conversation history into a succinct summary (TL;DR) and composing the necessary messages to create a streamlined prompt suitable for language model interactions.

Functions

suspend fun <T> AIAgentGraphContextBase.appendPromptImpl(input: T, body: PromptBuilder.() -> Unit): T

InternalAgentsApi method. Appends a prompt to the current LLM session.

InternalAgentsApi method. Executes a single tool with the provided arguments and returns the result.

suspend fun <T> AIAgentGraphContextBase.llmCompressHistoryImpl(input: T, retrievalModel: LLModel?, strategy: HistoryCompressionStrategy, preserveMemory: Boolean): T

InternalAgentsApi method. Performs LLM history compression.

inline fun <T> nodeAppendPrompt(name: String? = null, noinline body: PromptBuilder.() -> Unit): AIAgentNodeDelegate<T, T>

A node that adds messages to the LLM prompt using the provided prompt builder. The input is passed as it is to the output.
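
As a minimal sketch of how such a node might be declared (the surrounding strategy-graph DSL and the `system(...)` builder on PromptBuilder are assumptions, not shown in this listing):

```kotlin
// Hypothetical usage inside a strategy graph: add a standing system
// instruction to the prompt while passing the node input through unchanged.
val addGuidelines by nodeAppendPrompt<String>("add-guidelines") {
    system("Answer concisely and prefer bullet points.") // assumed PromptBuilder API
}
```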

inline fun <T> nodeDoNothing(name: String? = null): AIAgentNodeDelegate<T, T>

A pass-through node that does nothing and returns its input unchanged as the output.

A node that executes multiple tool calls. These calls can optionally be executed in parallel.

Creates a node in the AI agent subgraph that processes a collection of tool calls, executes them, and sends back the results to the downstream process. The tools can be executed either in parallel or sequentially based on the provided configuration.

inline fun <ToolArg, TResult> nodeExecuteSingleTool(name: String? = null, tool: Tool<ToolArg, TResult>, doAppendPrompt: Boolean = true): AIAgentNodeDelegate<ToolArg, SafeTool.Result<TResult>>

A node that calls a specific tool directly using the provided arguments.
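
For illustration, a node bound to a hypothetical search tool might be declared as follows (mySearchTool and its argument/result types are invented for the example):

```kotlin
// Sketch: execute one concrete tool. The node's input type is the tool's
// argument type; its output is a SafeTool.Result wrapping the tool result.
val runSearch by nodeExecuteSingleTool(
    name = "run-search",
    tool = mySearchTool,   // hypothetical Tool<SearchArgs, SearchResult>
    doAppendPrompt = true, // presumably records the call and result in the prompt
)
```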

A node that executes a tool call and returns its result.

inline fun <T> nodeLLMCompressHistory(name: String? = null, strategy: HistoryCompressionStrategy = HistoryCompressionStrategy.WholeHistory, retrievalModel: LLModel? = null, preserveMemory: Boolean = true): AIAgentNodeDelegate<T, T>

A node that compresses the current LLM prompt (message history) into a summary, replacing the messages with a TL;DR.
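
A minimal sketch using the defaults from the signature above (WholeHistory is the documented default strategy):

```kotlin
// Sketch: squash the accumulated message history into a TL;DR summary,
// keeping agent memory intact, before continuing the conversation.
val compressHistory by nodeLLMCompressHistory<String>(
    name = "compress-history",
    strategy = HistoryCompressionStrategy.WholeHistory,
    preserveMemory = true,
)
```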

fun nodeLLMModerateMessage(name: String? = null, moderatingModel: LLModel? = null, includeCurrentPrompt: Boolean = false): AIAgentNodeDelegate<Message, ModeratedMessage>

A node that moderates only a single input message using a specified language model.

fun nodeLLMRequest(name: String? = null, allowToolCalls: Boolean = true): AIAgentNodeDelegate<String, Message.Response>

A node that appends a user message to the LLM prompt and gets a response with optional tool usage.
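
A typical declaration, sketched under the assumption of a surrounding strategy-graph builder with nodeStart and forwardTo edges (those names are not part of this listing):

```kotlin
// Sketch: a single LLM round-trip with tool calls enabled.
val callLLM by nodeLLMRequest("call-llm", allowToolCalls = true)

// Hypothetical wiring: route the agent's input string into the LLM node.
edge(nodeStart forwardTo callLLM)
```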

A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.

A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.

A node that appends a user message to the LLM prompt and gets multiple LLM responses with tool calls enabled.

A node that appends a user message to the LLM prompt and gets multiple LLM responses where the LLM can only call tools.

A node that appends a user message to the LLM prompt and gets a response where the LLM can only call tools.

fun nodeLLMRequestStreaming(name: String? = null, structureDefinition: StructureDefinition? = null): AIAgentNodeDelegate<String, Flow<StreamFrame>>

A node that appends a user message to the LLM prompt and streams LLM response without transformation.

fun <T> nodeLLMRequestStreaming(name: String? = null, structureDefinition: StructureDefinition? = null, transformStreamData: suspend (Flow<StreamFrame>) -> Flow<T>): AIAgentNodeDelegate<String, Flow<T>>

A node that appends a user message to the LLM prompt, streams LLM response and transforms the stream data.
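
As an illustrative sketch of the transforming overload (the mapping shown is arbitrary; StreamFrame's actual accessors are not part of this listing):

```kotlin
import kotlinx.coroutines.flow.map

// Sketch: stream the response and turn each frame into its string form.
val streamText by nodeLLMRequestStreaming<String>("stream-text") { frames ->
    frames.map { frame -> frame.toString() }
}
```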

inline fun <T> nodeLLMRequestStreamingAndSendResults(name: String? = null, structureDefinition: StructureDefinition? = null): AIAgentNodeDelegate<T, List<Message.Response>>

A node that performs LLM streaming, collects all stream frames, converts them to response messages, and updates the prompt with the results.

inline fun <T> nodeLLMRequestStructured(name: String? = null, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): AIAgentNodeDelegate<String, Result<StructuredResponse<T>>>

A node that appends a user message to the LLM prompt and requests structured data from the LLM with optional error correction capabilities.
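
A sketch of requesting structured output (the WeatherReport type and its @Serializable annotation are assumptions for the example):

```kotlin
import kotlinx.serialization.Serializable

@Serializable
data class WeatherReport(val city: String, val temperatureC: Double)

// Sketch: ask the LLM for a WeatherReport, seeding it with one example.
// With no fixing parser, malformed output surfaces as a failed Result.
val structured by nodeLLMRequestStructured<WeatherReport>(
    name = "weather-structured",
    examples = listOf(WeatherReport("Oslo", 12.5)),
    fixingParser = null,
)
```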

A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.

A node that appends a user message to the LLM prompt and forces the LLM to use a specific tool.

A node that appends a user message to the LLM prompt and gets a response where the LLM can only call tools.

A node that adds multiple tool results to the prompt and gets multiple LLM responses.

A node that adds multiple tool results to the prompt and gets multiple LLM responses where the LLM can only call tools.

A node that adds a tool result to the prompt and requests an LLM response.

A node that adds a tool result to the prompt and gets an LLM response where the LLM can only call tools.

Creates a node that sets up a structured output for an AI agent subgraph.

inline fun <T> nodeUpdatePrompt(name: String? = null, noinline body: PromptBuilder.() -> Unit): AIAgentNodeDelegate<T, T>

A node that adds messages to the LLM prompt using the provided prompt builder. The input is passed as it is to the output.

Creates an edge that filters assistant messages based on a custom condition and extracts their content.

Creates an edge that filters assistant messages based on a custom condition and provides access to media content.

Defines a handler that processes failure cases in a directed edge strategy by applying a condition to filter intermediate results of type SafeTool.Result.Failure. Use it to specialize handling of failure results and to propagate or transform them based on the provided condition.

Creates an edge that filters assistant messages based on a custom condition and extracts their content.

Filters and transforms the intermediate outputs of the AI agent node based on the success results of a tool operation.

Creates an edge that filters tool result messages for a specific tool and result condition.

InternalAgentsApi method. Performs LLM streaming and sends the results to the prompt.

suspend fun <T> AIAgentGraphContextBase.requestStreamingImpl(message: String, structureDefinition: StructureDefinition?, transformStreamData: suspend (Flow<StreamFrame>) -> Flow<T>): Flow<T>

InternalAgentsApi method. Performs LLM streaming and transforms the stream data.

InternalAgentsApi method. Sets up structured output for an AI agent subgraph.