AIAgentFunctionalContext
Represents the execution context for an AI agent of the FunctionalAIAgent type, serving as the execution environment and state holder while the agent operates within a predefined pipeline. It extends AIAgentFunctionalContextBase and provides configuration, state management, and storage for the agent's functional operations.
Parameters
The AIAgentEnvironment in which the AI agent operates, facilitating interaction with the external environment for tool execution and error reporting.
A unique identifier for the agent, used to distinguish it from other agents.
An identifier representing the execution run of the agent, useful for tracking and managing runs.
The input data provided to the agent, which can guide its execution or decision-making process.
The AIAgentConfig object containing configuration information for the agent, such as behavior settings.
The AIAgentLLMContext providing access to the large language model interactions for generating outputs.
The AIAgentStateManager responsible for managing and persisting the state of the agent during its lifecycle.
The AIAgentStorage interface facilitating storage and retrieval of data in the agent's environment.
The name of the strategic approach or plan under which the agent is functioning.
The AIAgentFunctionalPipeline defining the functional execution flow of the agent's operations.
The AgentExecutionInfo containing metadata and runtime information about the agent's current execution.
An optional reference to the parent AIAgentContext, enabling hierarchical context structure if needed.
Constructors
Properties
Represents the input provided to the agent's execution.
Represents the configuration for an AI agent.
Represents the environment in which the agent operates.
Represents the observability data associated with the AI Agent context.
Represents the AI agent's LLM context, providing mechanisms for managing tools, prompts, and interaction with the execution environment. It ensures thread safety during concurrent read and write operations through the use of sessions.
Represents the parent context of the AI Agent.
Represents the pipeline associated with the AI agent.
Manages and tracks the state of an AI agent within the context of its execution.
Concurrent-safe key-value storage for an agent, used to manage and persist data within the context of the AI agent stage execution. The storage property provides a thread-safe mechanism for sharing and storing data specific to the agent's operation.
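The storage described above behaves like a concurrent typed key-value map. A minimal standalone sketch of that pattern (not Koog's actual AIAgentStorage implementation; TypedKey and SimpleAgentStorage are illustrative stand-ins):

```kotlin
import java.util.concurrent.ConcurrentHashMap

// Hypothetical typed key; the real storage API may shape keys differently.
class TypedKey<T>(val name: String)

// Thread-safe store backed by ConcurrentHashMap, mirroring the
// set / get / remove operations the documentation describes.
class SimpleAgentStorage {
    private val map = ConcurrentHashMap<TypedKey<*>, Any>()

    fun <T : Any> set(key: TypedKey<T>, value: T) { map[key] = value }

    @Suppress("UNCHECKED_CAST")
    fun <T : Any> get(key: TypedKey<T>): T? = map[key] as T?

    fun remove(key: TypedKey<*>) { map.remove(key) }
}

fun main() {
    val storage = SimpleAgentStorage()
    val retries = TypedKey<Int>("retry-count")
    storage.set(retries, 3)
    println(storage.get(retries)) // prints 3
}
```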
Represents the name of the strategy being used in the current AI agent context.
Functions
Utility function that retrieves AIAgentContext.agentInput and attempts to cast it to an expected type.
Casts the current instance of a Message.Response to a Message.Assistant. This function should only be used when it is guaranteed that the instance is of type Message.Assistant, as it will throw an exception if the type does not match.
Attempts to cast a Message.Response instance to a Message.Assistant type.
Compresses the current LLM prompt (message history) into a summary, replacing messages with a TLDR.
Checks if the list of Message.Response contains any instances of Message.Tool.Call.
Creates a copy of the current AIAgentFunctionalContext, allowing for selective overriding of its properties. This method is useful for creating modified contexts during agent execution without mutating the original: callers can experiment with different configurations or pass adjusted contexts down the execution pipeline while the original context remains unchanged.
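The copy-with-overrides shape follows Kotlin's standard data-class copy semantics. A sketch with a hypothetical MiniContext (the field names are illustrative, not AIAgentFunctionalContext's real ones):

```kotlin
// Stand-in for a context with a few of the documented properties.
data class MiniContext(
    val agentId: String,
    val runId: String,
    val strategyName: String,
)

fun main() {
    val original = MiniContext(agentId = "a1", runId = "r1", strategyName = "default")
    // Override only runId; the original instance is left untouched.
    val forked = original.copy(runId = "r2")
    println(original.runId) // prints r1
    println(forked.runId)   // prints r2
}
```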
Extension function to access the Debugger feature from an agent context.
Executes multiple tool calls and returns their results. These calls can optionally be executed in parallel.
Calls a specific tool directly using the provided arguments.
Executes a tool call and returns the result.
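Executing multiple tool calls with optional parallelism, as described above, is naturally expressed with coroutines. A self-contained sketch under that assumption; ToolCall, ToolResult, and executeTool are stand-ins rather than Koog's real message and execution types:

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.runBlocking

data class ToolCall(val tool: String, val args: String)
data class ToolResult(val tool: String, val output: String)

// Stand-in for dispatching one call to the environment.
suspend fun executeTool(call: ToolCall): ToolResult =
    ToolResult(call.tool, "ran ${call.tool}(${call.args})")

// Runs the calls sequentially, or concurrently when parallel = true;
// results come back in the order of the input list either way.
suspend fun executeMultipleTools(
    calls: List<ToolCall>,
    parallel: Boolean = false,
): List<ToolResult> =
    if (parallel) coroutineScope { calls.map { async { executeTool(it) } }.awaitAll() }
    else calls.map { executeTool(it) }

fun main() = runBlocking {
    val results = executeMultipleTools(
        listOf(ToolCall("search", "kotlin"), ToolCall("calc", "2+2")),
        parallel = true,
    )
    results.forEach { println(it.output) }
}
```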
Extracts a list of tool call messages from a given list of response messages.
Retrieves a feature from the AIAgentContext.pipeline associated with this context using the specified key.
Retrieves a feature from the AIAgentContext.pipeline associated with this context using the specified key or throws an exception if it is not available.
Retrieves data from the agent's storage using the specified key.
Retrieves the agent-specific context data associated with the current instance.
Retrieves the history of messages exchanged during the agent's execution.
Retrieves the latest token usage from the prompt within the LLM session.
Executes the provided action if the given response is of type Message.Assistant.
Filters the provided list of response messages to include only assistant messages and, if the filtered list is not empty, performs the specified action with the filtered list.
Invokes the provided action when multiple tool call messages are found within a given list of response messages. Filters the list of responses to include only instances of Message.Tool.Call and executes the action on the filtered list if it is not empty.
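The filter-then-act shape shared by these helpers can be sketched with filterIsInstance; the Message hierarchy below is a simplified stand-in for the library's real one:

```kotlin
// Simplified message hierarchy for illustration only.
sealed interface Message {
    data class Assistant(val content: String) : Message
    data class ToolCall(val tool: String) : Message
}

// Runs the action only when the responses contain at least one tool call,
// mirroring the documented onMultipleToolCalls behavior.
inline fun List<Message>.onToolCalls(action: (List<Message.ToolCall>) -> Unit) {
    val calls = filterIsInstance<Message.ToolCall>()
    if (calls.isNotEmpty()) action(calls)
}

fun main() {
    val responses = listOf(
        Message.Assistant("thinking..."),
        Message.ToolCall("search"),
        Message.ToolCall("calc"),
    )
    responses.onToolCalls { calls ->
        println("got ${calls.size} tool calls") // prints "got 2 tool calls"
    }
}
```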
Removes a feature or data associated with the specified key from the agent's storage.
Removes the agent-specific context data associated with the current context.
Sends a message to a Large Language Model (LLM) and optionally allows the use of tools during the LLM interaction. The message becomes part of the current prompt, and the LLM's response is processed accordingly, either with or without tool integrations based on the provided parameters.
Sends a message to a Large Language Model (LLM) and forces it to use a specific tool. The message becomes part of the current prompt, and the LLM is instructed to use only the specified tool.
Sends a message to a Large Language Model (LLM) and gets multiple LLM responses with tool calls enabled. The message becomes part of the current prompt, and multiple responses from the LLM are collected.
Sends a message to a Large Language Model (LLM) that will only call tools without generating text responses. The message becomes part of the current prompt, and the LLM is instructed to only use tools.
Sends a message to a Large Language Model (LLM) and streams the LLM response. The message becomes part of the current prompt, and the LLM's response is streamed as it's generated.
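Streamed responses like this are conventionally consumed as a Flow of chunks. A sketch under that assumption, with a stand-in producer emitting canned chunks in place of a real LLM call:

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flow
import kotlinx.coroutines.runBlocking

// Stand-in for a streaming LLM request; emits fixed chunks.
fun requestLLMStreaming(message: String): Flow<String> = flow {
    for (chunk in listOf("Hel", "lo, ", message)) emit(chunk)
}

fun main() = runBlocking {
    val sb = StringBuilder()
    // Chunks arrive incrementally and are appended as they are generated.
    requestLLMStreaming("world").collect { chunk -> sb.append(chunk) }
    println(sb) // prints "Hello, world"
}
```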
Sends a structured request to the Large Language Model (LLM) and processes the response.
Provides the root context of the current agent. If the root context is not defined, this function defaults to returning the current instance.
Adds multiple tool results to the prompt and gets multiple LLM responses.
Adds a tool result to the prompt and requests an LLM response.
Stores a feature in the agent's storage using the specified key.
Stores the given agent context data within the current AI agent context.
Executes a subtask within the larger context of an AI agent's functional operation. This method allows defining a specific task to be performed with the given input, tools, and optional configuration.
Executes a subtask within the AI agent's functional context. This method enables the use of tools to achieve a specific task based on the input provided.
Executes a subtask with validation and verification of the results. The method defines a subtask for the AI agent using the provided input and additional parameters and ensures that the output is evaluated based on its correctness and feedback.
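The verify-and-retry shape of a validated subtask can be sketched as below; Verdict, the subtask lambda, and the verification lambda are stand-ins for whatever Koog's actual API provides:

```kotlin
data class Verdict(val correct: Boolean, val feedback: String)

// Runs the subtask, verifies the output, and feeds the verifier's
// feedback back into the next attempt until it passes or attempts run out.
fun runSubtaskWithVerification(
    input: String,
    maxAttempts: Int = 3,
    subtask: (input: String, feedback: String?) -> String,
    verify: (output: String) -> Verdict,
): String {
    var feedback: String? = null
    repeat(maxAttempts) {
        val output = subtask(input, feedback)
        val verdict = verify(output)
        if (verdict.correct) return output
        feedback = verdict.feedback
    }
    error("subtask failed verification after $maxAttempts attempts")
}

fun main() {
    var attempts = 0
    val result = runSubtaskWithVerification(
        input = "summarize",
        subtask = { _, fb -> attempts++; if (fb == null) "draft" else "final" },
        verify = { out -> Verdict(out == "final", "needs revision") },
    )
    println("$result after $attempts attempts") // prints "final after 2 attempts"
}
```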
Executes a block of code with a modified execution context.
Executes a block of code with a modified execution context, creating a parent-child relationship between execution contexts for tracing purposes.