AIAgentFunctionalContextBase API
API for the AIAgentFunctionalContextBase
Inheritors
Properties
Represents the input provided to the agent's execution.
Represents the configuration for an AI agent.
Represents the environment in which the agent operates.
Represents the observability data associated with the AI Agent context.
Represents the AI agent's LLM context, providing mechanisms for managing tools, prompts, and interaction with the execution environment. It ensures thread safety during concurrent read and write operations through the use of sessions.
Represents the parent context of the AI Agent.
Manages and tracks the state of an AI agent within the context of its execution.
Concurrent-safe key-value storage for an agent, used to manage and persist data during AI agent stage execution. The storage property provides a thread-safe mechanism for sharing and storing data specific to the agent's operation.
Represents the name of the strategy being used in the current AI agent context.
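As an illustration of how the `llm` property is typically used: it exposes sessions that serialize access to the prompt. The sketch below assumes Koog's session DSL (`writeSession`, `updatePrompt`, `user`); exact builder names may differ between versions.

```kotlin
// Illustrative sketch inside an agent's functional context; the session DSL
// (writeSession / updatePrompt / user) is assumed from Koog's documentation.
llm.writeSession {
    // Append a user message to the current prompt under the session's write lock,
    // so concurrent readers never observe a partially updated history.
    updatePrompt {
        user("Summarize the previous conversation.")
    }
}
```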
Functions
Utility function to get AIAgentContext.agentInput and try to cast it to some expected type.
Casts the current instance of a Message.Response to a Message.Assistant. This function should only be used when it is guaranteed that the instance is of type Message.Assistant, as it will throw an exception if the type does not match.
Attempts to cast a Message.Response instance to a Message.Assistant type.
Compresses the current LLM prompt (message history) into a summary, replacing messages with a TLDR.
Checks if the list of Message.Response contains any instances of Message.Tool.Call.
Extension function to access the Debugger feature from an agent context.
Executes multiple tool calls and returns their results. These calls can optionally be executed in parallel.
Calls a specific tool directly using the provided arguments.
Executes a tool call and returns the result.
Extracts a list of tool call messages from a given list of response messages.
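Several of the helpers above combine into the typical functional-agent tool loop. The sketch below follows the pattern from Koog's functional-agent documentation; agent construction and the `input` value are elided, and signatures are simplified.

```kotlin
// Sketch of the tool-call loop, combining the helpers summarized above.
var responses = requestLLMMultiple(input)
while (responses.containsToolCalls()) {
    val pendingCalls = extractToolCalls(responses)   // keep only Message.Tool.Call entries
    val results = executeMultipleTools(pendingCalls) // run the tools (optionally in parallel)
    responses = sendMultipleToolResults(results)     // feed results back, get new responses
}
// Once no tool calls remain, the final response is an assistant message.
val answer = responses.single().asAssistantMessage().content
```

Note that `asAssistantMessage()` throws if the response is not a `Message.Assistant`; here that is safe because the loop only exits when no tool calls remain.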
Retrieves a feature from the AIAgentContext.pipeline associated with this context using the specified key.
Retrieves a feature from the AIAgentContext.pipeline associated with this context using the specified key or throws an exception if it is not available.
Retrieves data from the agent's storage using the specified key.
Retrieves the agent-specific context data associated with the current instance.
Retrieves the history of messages exchanged during the agent's execution.
Retrieves the latest token usage from the prompt within the LLM session.
Executes the provided action if the given response is of type Message.Assistant.
Filters the provided list of response messages to include only assistant messages and, if the filtered list is not empty, performs the specified action with the filtered list.
Invokes the provided action when multiple tool call messages are found within a given list of response messages. Filters the list of responses to include only instances of Message.Tool.Call and executes the action on the filtered list if it is not empty.
Removes a feature or data associated with the specified key from the agent's storage.
Removes the agent-specific context data associated with the current context.
Sends a message to a Large Language Model (LLM) and optionally allows the use of tools during the LLM interaction. The message becomes part of the current prompt, and the LLM's response is processed accordingly, either with or without tool integrations based on the provided parameters.
Sends a message to a Large Language Model (LLM) and forces it to use a specific tool. The message becomes part of the current prompt, and the LLM is instructed to use only the specified tool.
Sends a message to a Large Language Model (LLM) and gets multiple LLM responses with tool calls enabled. The message becomes part of the current prompt, and multiple responses from the LLM are collected.
Sends a message to a Large Language Model (LLM) that will only call tools without generating text responses. The message becomes part of the current prompt, and the LLM is instructed to only use tools.
Sends a message to a Large Language Model (LLM) and streams the LLM response. The message becomes part of the current prompt, and the LLM's response is streamed as it's generated.
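A hedged sketch of the request variants described above; parameter lists are simplified, and the streaming call is assumed to return a `Flow` of text chunks (the actual chunk type may differ between versions).

```kotlin
// Illustrative usage inside a functional agent body; names follow this reference.
val reply = requestLLM("Draft a release note")            // single response, tools optional
val replies = requestLLMMultiple("Plan the migration")    // multiple responses, tools enabled
val calls = requestLLMOnlyCallingTools("Look up the fix") // tool calls only, no free text

requestLLMStreaming("Explain the change").collect { chunk ->
    print(chunk) // chunks arrive incrementally as the model generates them
}
```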
Provides the root context of the current agent. If the root context is not defined, this function defaults to returning the current instance.
Adds multiple tool results to the prompt and gets multiple LLM responses.
Adds a tool result to the prompt and requests an LLM response.
Stores a feature in the agent's storage using the specified key.
Stores the given agent context data within the current AI agent context.
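The storage operations above can be sketched as follows; `createStorageKey` and the `set`/`get`/`remove` calls are assumed from Koog's typed-key storage API and may differ in detail.

```kotlin
// Illustrative storage usage with a typed key (createStorageKey is assumed).
val key = createStorageKey<String>("session-notes")
storage.set(key, "user prefers concise answers") // store under the typed key
val notes = storage.get(key)                     // retrieve, or null if absent
storage.remove(key)                              // remove when no longer needed
```

The typed key ties a name to a value type, so concurrent readers and writers of the same slot agree on what is stored there.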
Executes a subtask within the AI agent's functional context. This method enables the use of tools to achieve a specific task based on the input provided. It defines the task using an inline function, employs tools iteratively, and attempts to complete the subtask with a designated finishing tool.
Executes a subtask within the larger context of an AI agent's functional operation. This method allows you to define a specific task to be performed, using the given input, tools, and optional configuration parameters.
Executes a subtask with validation and verification of the results. The method defines a subtask for the AI agent using the provided input and additional parameters and ensures that the output is evaluated based on its correctness and feedback.
Executes a block of code with a modified execution context.
Executes a block of code with a modified execution context, creating a parent-child relationship between execution contexts for tracing purposes.