AIAgentFunctionalContext
Represents the execution context for an AI agent operating in a loop. It provides access to the core components of the agent runtime, such as the environment, configuration, large language model (LLM) context, state management, and storage. It also enables the agent to store, retrieve, and manage context-specific data during its execution lifecycle.
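For orientation, the sketch below shows the typical shape of a functional agent: the loop body runs with this context as its receiver, so the members documented on this page are called directly. The builder and executor names (functionalAIAgent, simpleOpenAIExecutor, OpenAIModels) and the requestLLM/asAssistantMessage calls are assumptions made for illustration rather than a verbatim API listing; import statements are omitted.

```kotlin
// Hedged sketch: builder, executor, and member names are assumptions; imports omitted.
val agent = functionalAIAgent<String, String>(
    prompt = "You are a helpful assistant.",
    promptExecutor = simpleOpenAIExecutor(System.getenv("OPENAI_API_KEY")),
    model = OpenAIModels.Chat.GPT4o,
) { input ->
    // this: AIAgentFunctionalContext
    val response = requestLLM(input)           // send the user input to the LLM
    response.asAssistantMessage().content      // return the assistant's text as the agent output
}

suspend fun main() {
    println(agent.run("Explain what an execution context is."))
}
```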
Constructors
Properties
Represents the AIAgent holding the current AIAgentContext.
The input data passed to the agent, which can be of any type, depending on the agent's context.
The configuration settings for the agent, including its prompt and model details, as well as operational constraints like iteration limits.
The environment interface allowing the agent to interact with the external world, including executing tools and reporting problems.
The context for interacting with the large language model used by the agent, enabling message history retrieval and processing.
Represents the parent context of the AI Agent.
The state management component responsible for tracking and updating the agent's state during its execution.
A storage interface providing persistent storage capabilities for the agent's data.
The name of the strategy the agent is running, which determines its behavior during execution.
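Taken together, these properties let the loop body inspect its own runtime. The sketch below reads a few of them; the property names (strategyName, agentInput, config, llm) follow the descriptions above and are assumptions about the exact API surface, as is the readSession call.

```kotlin
// Hedged sketch: property and session names are assumed from the descriptions above.
suspend fun AIAgentFunctionalContext.logRuntimeInfo() {
    println("Strategy: $strategyName")                        // operational method in use
    println("Raw input: $agentInput")                         // untyped input passed to the agent
    println("Iteration limit: ${config.maxAgentIterations}")  // operational constraint from the config
    val messages = llm.readSession { prompt.messages.size }   // peek at the current message history
    println("Messages so far: $messages")
}
```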
Functions
A utility function that retrieves AIAgentContext.agentInput and attempts to cast it to the expected type.
Compresses the current LLM prompt (message history) into a summary, replacing messages with a TLDR.
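A typical use is keeping a long-running loop inside the model's context window by summarizing older messages once the history grows. A minimal sketch, assuming the member is named compressHistory and that the history size can be read through the LLM context:

```kotlin
// Hedged sketch: compressHistory() and the readSession access are assumed names.
suspend fun AIAgentFunctionalContext.compressIfNeeded(threshold: Int = 50) {
    val historySize = llm.readSession { prompt.messages.size }
    if (historySize > threshold) {
        compressHistory()   // replace older messages with a TLDR-style summary
    }
}
```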
Creates a copy of the current AIAgentFunctionalContext, allowing selective overriding of its properties. This is useful for producing a modified context during agent execution without mutating the original, for example to experiment with a different configuration or to pass a tweaked context further down the execution pipeline.
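A minimal sketch of such an override, assuming copy exposes the context's properties as named parameters and that the configuration itself is copyable:

```kotlin
// Hedged sketch: the parameter names of copy(...) are assumptions.
fun AIAgentFunctionalContext.withTightIterationLimit(): AIAgentFunctionalContext =
    copy(config = config.copy(maxAgentIterations = 5))  // the original context stays untouched
```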
Executes multiple tool calls and returns their results. These calls can optionally be executed in parallel.
Calls a specific tool directly using the provided arguments.
Executes a tool call and returns the result.
Extracts a list of tool call messages from a given list of response messages.
Retrieves a feature from the current context using the specified key.
Retrieves a feature of the specified type from the current context.
Retrieves a feature of the specified type from the context or throws an exception if it is not available.
Retrieves data from the agent's storage using the specified key.
Retrieves the agent-specific context data associated with the current instance.
Retrieves the history of messages exchanged during the agent's execution.
Retrieves the latest token usage from the prompt within the LLM session.
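These accessors are convenient for per-iteration instrumentation. A small sketch, with getHistory and latestTokenUsage as assumed member names matching the descriptions above:

```kotlin
// Hedged sketch: getHistory() and latestTokenUsage() are assumed member names.
suspend fun AIAgentFunctionalContext.reportUsage() {
    println("Messages exchanged so far: ${getHistory().size}")
    println("Tokens used by the latest prompt: ${latestTokenUsage()}")
}
```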
Executes the provided action if the given response is of type Message.Assistant.
Filters the provided list of response messages to include only assistant messages and, if the filtered list is not empty, performs the specified action with the filtered list.
Invokes the provided action when multiple tool call messages are found within a given list of response messages. Filters the responses to include only instances of Message.Tool.Call and executes the action on the filtered list if it is not empty.
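A sketch of how these handlers might branch on a batch of responses; onAssistantMessage and onMultipleToolCalls are assumed names taken from the descriptions above, and requestLLMMultiple is the assumed multi-response request described further down.

```kotlin
// Hedged sketch: handler and request names are assumptions based on the descriptions above.
suspend fun AIAgentFunctionalContext.inspectResponses(input: String) {
    val responses = requestLLMMultiple(input)
    onAssistantMessage(responses.first()) { assistant ->
        println("Assistant said: ${assistant.content}")       // runs only for Message.Assistant
    }
    onMultipleToolCalls(responses) { calls ->
        println("Model requested ${calls.size} tool calls")   // runs only if tool calls are present
    }
}
```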
Removes a feature or data associated with the specified key from the agent's storage.
Removes the agent-specific context data associated with the current context.
Sends a message to a Large Language Model (LLM) and optionally allows the use of tools during the LLM interaction. The message becomes part of the current prompt, and the LLM's response is processed accordingly, either with or without tool integrations based on the provided parameters.
Sends a message to a Large Language Model (LLM) and forces it to use a specific tool. The message becomes part of the current prompt, and the LLM is instructed to use only the specified tool.
Sends a message to a Large Language Model (LLM) and gets multiple LLM responses with tool calls enabled. The message becomes part of the current prompt, and multiple responses from the LLM are collected.
Sends a message to a Large Language Model (LLM) that will only call tools without generating text responses. The message becomes part of the current prompt, and the LLM is instructed to only use tools.
Sends a message to a Large Language Model (LLM) and streams the LLM response. The message becomes part of the current prompt, and the LLM's response is streamed as it's generated.
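A short sketch of consuming the stream; requestLLMStreaming is an assumed name, and the element type of the returned flow (plain text chunks here) is also an assumption.

```kotlin
// Hedged sketch: the function name and the flow's element type are assumptions; imports omitted.
suspend fun AIAgentFunctionalContext.streamAnswer(question: String) {
    requestLLMStreaming(question).collect { chunk ->
        print(chunk)   // emit tokens as they arrive instead of waiting for the full response
    }
}
```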
Sends a message to a Large Language Model (LLM) and requests structured data from the LLM with error correction capabilities. The message becomes part of the current prompt, and the LLM's response is processed to extract structured data.
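A sketch of the structured-data request; the call shape (a reified type argument on requestLLMStructured) and the @Serializable target class are assumptions for illustration.

```kotlin
import kotlinx.serialization.Serializable

// Hedged sketch: requestLLMStructured's exact signature and return wrapper are assumptions.
@Serializable
data class WeatherReport(val city: String, val temperatureC: Double)

suspend fun AIAgentFunctionalContext.reportWeather(city: String) {
    // The parsed result comes back in whatever wrapper the API returns; if parsing fails,
    // the error-correction pass described above retries before giving up.
    val report = requestLLMStructured<WeatherReport>("Report the current weather in $city")
    println(report)
}
```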
Provides the root context of the current agent. If the root context is not defined, this function defaults to returning the current instance.
Adds multiple tool results to the prompt and gets multiple LLM responses.
Adds a tool result to the prompt and requests an LLM response.
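Together with the tool-execution helpers above, these two functions close the classic request/execute/report cycle. A hedged sketch of that loop, with all member names (requestLLMMultiple, containsToolCalls, extractToolCalls, executeMultipleTools, sendMultipleToolResults, asAssistantMessage) assumed from the descriptions on this page:

```kotlin
// Hedged sketch of the tool loop; every member name here is an assumption.
suspend fun AIAgentFunctionalContext.runToolLoop(input: String): String {
    var responses = requestLLMMultiple(input)            // initial request with tools enabled
    while (responses.containsToolCalls()) {              // keep looping while the model asks for tools
        val calls = extractToolCalls(responses)          // pull out the Message.Tool.Call entries
        val results = executeMultipleTools(calls)        // run them, optionally in parallel
        responses = sendMultipleToolResults(results)     // report results back and get new responses
    }
    return responses.single().asAssistantMessage().content
}
```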
Stores a feature in the agent's storage using the specified key.
Stores the given agent context data within the current AI agent context.
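Finally, a sketch of the key-based storage round trip; store, getData, and remove are assumed names for the store/retrieve/remove operations described above.

```kotlin
// Hedged sketch: store, getData, and remove are assumed names and call shapes.
suspend fun AIAgentFunctionalContext.rememberUserName(name: String) {
    store("user.name", name)                      // persist a value under a key
    val recalled: String? = getData("user.name")  // read it back later in the same run
    println("Recalled: $recalled")
    remove("user.name")                           // drop the entry once it is no longer needed
}
```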