LLMBasedToolCallFixProcessor
A response processor that fixes incorrectly communicated tool calls.
Applies an LLM-based approach to fix incorrectly generated tool calls. Iteratively asks the LLM to update a message until it is a correct tool call.
The first step is to identify whether corrections are needed. This is done by (a) asking the LLM whether the message intends to call a tool, if the message is a Message.Assistant, or (b) trying to parse the tool name and parameters, if the message is a Message.Tool.Call.
The main step is to fix the message (if needed). The processor runs a loop asking the LLM to fix the message. On every iteration, the processor provides the LLM with the current message and feedback on it. If the LLM fails to return a correct tool call message within maxRetries iterations, the fallback processor is used. If no fallback processor is provided, the original message is returned.
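Conceptually, the flow looks roughly like the sketch below. This is a simplified illustration only, not the actual implementation; the helper names isToolCallIntended, requestFix, and isValidToolCall are hypothetical.
// Step 1: decide whether the message needs fixing (hypothetical helpers).
suspend fun needsFix(message: Message.Response): Boolean = when (message) {
    is Message.Assistant -> isToolCallIntended(message) // ask the LLM whether a tool call was intended
    is Message.Tool.Call -> !isValidToolCall(message)   // try to parse the tool name and arguments
    else -> false
}

// Step 2: iteratively ask the LLM to fix the message, giving feedback on each attempt.
suspend fun fixMessage(original: Message.Response): Message.Response {
    var current = original
    repeat(maxRetries) {
        val candidate = requestFix(current)             // LLM attempt guided by feedback
        if (isValidToolCall(candidate)) return candidate
        current = candidate
    }
    return original // in the real processor, the fallback processor (if any) is applied here
}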
Some use-cases:
Simple usage:
val processor = LLMBasedToolCallFixProcessor(toolRegistry) // Tool registry is required
Customizing the json keys:
val processor = LLMBasedToolCallFixProcessor(
    toolRegistry,
    ToolCallJsonConfig(
        idJsonKeys = ToolCallJsonConfig.defaultIdJsonKeys + listOf("custom_id_keys", ...),
        nameJsonKeys = ToolCallJsonConfig.defaultNameJsonKeys + listOf("custom_name_keys", ...),
        argsJsonKeys = ToolCallJsonConfig.defaultArgsJsonKeys + listOf("custom_args_keys", ...),
    ), // Add custom json keys produced by your LLM
)
Using a fallback processor. Here the fallback processor calls another (e.g. better but more expensive) LLM to fix the message:
val betterModel = OpenAIModels.Chat.GPT4o
val fallbackProcessor = object : ResponseProcessor() {
    override suspend fun process(
        executor: PromptExecutor,
        prompt: Prompt,
        model: LLModel,
        tools: List<ToolDescriptor>,
        responses: List<Message.Response>
    ): List<Message.Response> {
        val promptFixing = prompt(prompt) {
            user("please fix the following incorrectly generated tool call messages: $responses")
        }
        return executor.execute(promptFixing, betterModel, tools) // use a better LLM
    }
}
val processor = LLMBasedToolCallFixProcessor(
    toolRegistry,
    fallbackProcessor = fallbackProcessor
)
Parameters
The tool registry with available tools
Configuration for parsing and fixing tool call json
A processor applied to all responses from the LLM. Defaults to ManualToolCallFixProcessor
The system message used to ask the LLM whether a tool call was intended
The system message used to ask the LLM to fix a tool call
The message sent to the LLM when the tool call json is invalid
The message sent to the LLM when the tool name is invalid
The message sent to the LLM when the tool arguments are invalid
The fallback processor to use if LLM fails to fix a tool call. Defaults to null, meaning that the original message is returned if the LLM fails to fix a tool call.
The maximum number of iterations in the main loop
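For example, assuming the maxRetries parameter name referenced above, a construction that supplies a fallback and tightens the retry budget (other parameters keep their defaults):
val processor = LLMBasedToolCallFixProcessor(
    toolRegistry,
    fallbackProcessor = fallbackProcessor, // optional, defaults to null
    maxRetries = 3,                        // cap the number of fix iterations in the main loop
)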
Constructors
Functions
Chains two processors together.
Processes the given LLM responses, which were received using the executor, prompt, model, and tools.
Processes a single LLM response.
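As a usage sketch, the processor can be applied directly to a batch of responses; here executor, prompt, model, tools, and responses are assumed to already exist in the surrounding agent code:
val fixedResponses = processor.process(executor, prompt, model, tools, responses)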