subtaskWithVerification

inline suspend fun <Input> AIAgentFunctionalContext.subtaskWithVerification(
    input: Input,
    tools: List<Tool<*, *>>? = null,
    llmModel: LLModel? = null,
    llmParams: LLMParams? = null,
    runMode: ToolCalls = ToolCalls.SEQUENTIAL,
    assistantResponseRepeatMax: Int? = null,
    defineTask: suspend AIAgentFunctionalContext.(input: Input) -> String
): CriticResult<Input>

Executes a subtask and verifies its result. The function defines a subtask for the AI agent from the provided input and additional parameters, runs it, and evaluates the output for correctness, producing feedback on the outcome.

Return

A CriticResult object containing the verification status, feedback, and the original input for the subtask.

Parameters

Input

The type of the input provided to the subtask.

input

The input data for the subtask, which will be used to create and execute the task.

tools

An optional list of tools that can be used during the execution of the subtask.

llmModel

An optional parameter specifying the LLM model to be used for the subtask.

llmParams

Optional configuration parameters for the LLM, such as temperature and token limits.

runMode

The mode in which tools should be executed, either sequentially or in parallel.

assistantResponseRepeatMax

An optional parameter specifying the maximum number of retries for getting valid responses from the assistant.

defineTask

A suspend function, invoked with the provided input, that returns the subtask definition as a string.
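
A minimal usage sketch follows, assuming a context in which this extension is in scope (library imports omitted). The helper name summarizeWithChecks, the prompt text, and the CriticResult property names used below (successful, feedback, input) are illustrative assumptions inferred from the Return description above, not part of the documented API.

// Usage sketch: a helper with an AIAgentFunctionalContext receiver so that
// subtaskWithVerification can be called as documented above.
// CriticResult property names (successful, feedback, input) are assumptions
// based on the Return description, not guaranteed API.
suspend fun AIAgentFunctionalContext.summarizeWithChecks(topic: String): String {
    val result: CriticResult<String> = subtaskWithVerification(
        input = topic,
        runMode = ToolCalls.SEQUENTIAL,     // documented default
        assistantResponseRepeatMax = 3,     // allow up to 3 assistant responses
    ) { input ->
        // defineTask: build the subtask definition from the input.
        "Research the following topic and produce a short summary: $input"
    }

    return if (result.successful) {
        "Verified summary produced for: ${result.input}"
    } else {
        // The critic's feedback explains why verification failed.
        "Verification failed: ${result.feedback}"
    }
}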