LLMBasedToolCallFixProcessor

```kotlin
class LLMBasedToolCallFixProcessor(
    toolRegistry: ToolRegistry,
    toolCallJsonConfig: ToolCallJsonConfig = ToolCallJsonConfig(),
    preprocessor: ResponseProcessor = ManualToolCallFixProcessor(toolRegistry, toolCallJsonConfig),
    fallbackProcessor: ResponseProcessor? = null,
    assessToolCallIntentSystemMessage: String = Prompts.assessToolCallIntent,
    fixToolCallSystemMessage: String = Prompts.fixToolCall,
    invalidJsonFeedback: (List<ToolDescriptor>) -> String = Prompts::invalidJsonFeedback,
    invalidNameFeedback: (String, List<ToolDescriptor>) -> String = Prompts::invalidNameFeedback,
    invalidArgumentsFeedback: (String, ToolDescriptor) -> String = Prompts::invalidArgumentsFeedback,
    maxRetries: Int = 3
) : ToolJsonFixProcessor
```

A response processor that fixes incorrectly communicated tool calls.

Applies an LLM-based approach to fix incorrectly generated tool calls. Iteratively asks the LLM to update a message until it is a correct tool call.

The first step is to determine whether corrections are needed. This is done by:

  - asking the LLM whether the message intends to call a tool, if the message is a Message.Assistant;
  - trying to parse the name and parameters, if the message is a Message.Tool.Call.

The main step is to fix the message (if needed). The processor runs a loop, asking the LLM to fix the message. On every iteration, the processor provides the LLM with the current message and feedback on it. If the LLM fails to return a correct tool call message within maxRetries iterations, the fallback processor is used; if no fallback processor is provided, the original message is returned.
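The loop above can be sketched as follows. This is an illustrative, self-contained approximation, not the library's actual implementation: `fixToolCall`, `askLlmToFix`, `feedbackFor`, and the use of plain strings for messages are all hypothetical stand-ins.

```kotlin
// Illustrative sketch of the fix loop; the helpers and String-based
// message type are hypothetical stand-ins, not the library's actual API.
fun fixToolCall(
    original: String,
    maxRetries: Int,
    askLlmToFix: (current: String, feedback: String) -> String,
    feedbackFor: (message: String) -> String?, // null means the message is a valid tool call
    fallback: ((String) -> String)? = null,
): String {
    var current = original
    repeat(maxRetries) {
        // Valid tool call: nothing left to fix.
        val feedback = feedbackFor(current) ?: return current
        // Otherwise, send the current message and the feedback back to the LLM.
        current = askLlmToFix(current, feedback)
    }
    if (feedbackFor(current) == null) return current
    // All retries exhausted: delegate to the fallback, or return the original message.
    return fallback?.invoke(original) ?: original
}
```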

Some use cases:

  1. Simple usage:

```kotlin
val processor = LLMBasedToolCallFixProcessor(toolRegistry) // the tool registry is required
```

  2. Customizing the JSON keys:

```kotlin
val processor = LLMBasedToolCallFixProcessor(
    toolRegistry,
    ToolCallJsonConfig(
        idJsonKeys = ToolCallJsonConfig.defaultIdJsonKeys + listOf("custom_id_keys", ...),
        nameJsonKeys = ToolCallJsonConfig.defaultNameJsonKeys + listOf("custom_name_keys", ...),
        argsJsonKeys = ToolCallJsonConfig.defaultArgsJsonKeys + listOf("custom_args_keys", ...),
    ), // add custom JSON keys produced by your LLM
)
```
  3. Using a fallback processor. Here the fallback processor calls another (e.g. better but more expensive) LLM to fix the message:

```kotlin
val betterModel = OpenAIModels.Chat.GPT4o
val fallbackProcessor = object : ResponseProcessor() {
    override suspend fun process(
        executor: PromptExecutor,
        prompt: Prompt,
        model: LLModel,
        tools: List<ToolDescriptor>,
        responses: List<Message.Response>
    ): List<Message.Response> {
        val promptFixing = prompt(prompt) {
            user("please fix the following incorrectly generated tool call messages: $responses")
        }
        return executor.execute(promptFixing, betterModel, tools) // use a better LLM
    }
}

val processor = LLMBasedToolCallFixProcessor(
    toolRegistry,
    fallbackProcessor = fallbackProcessor
)
```
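A further option, following the parameters documented below, is to customize the prompts and feedback messages. The message text in this sketch is illustrative, and the `it.name` accessor on ToolDescriptor is an assumption:

```kotlin
// Illustrative sketch: the prompt wording is made up, and ToolDescriptor's
// `name` property is assumed here rather than taken from the documentation.
val processor = LLMBasedToolCallFixProcessor(
    toolRegistry,
    fixToolCallSystemMessage = "You are given a malformed tool call. " +
        "Rewrite it as a valid call to one of the available tools.",
    invalidNameFeedback = { name, tools ->
        "Unknown tool '$name'. Available tools: ${tools.joinToString { it.name }}"
    },
)
```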

Parameters

toolRegistry

The tool registry with available tools

toolCallJsonConfig

Configuration for parsing and fixing tool call json

preprocessor

A processor applied to all responses from the LLM. Defaults to ManualToolCallFixProcessor

assessToolCallIntentSystemMessage

The system message to ask LLM if a tool call was intended

fixToolCallSystemMessage

The system message to ask LLM to fix a tool call

invalidJsonFeedback

The message sent to the LLM when tool call json is invalid

invalidNameFeedback

The message sent to the LLM when the tool name is invalid

invalidArgumentsFeedback

The message sent to the LLM when tool arguments are invalid

fallbackProcessor

The fallback processor to use if LLM fails to fix a tool call. Defaults to null, meaning that the original message is returned if the LLM fails to fix a tool call.

maxRetries

The maximum number of iterations in the main loop

Constructors

```kotlin
constructor(
    toolRegistry: ToolRegistry,
    toolCallJsonConfig: ToolCallJsonConfig = ToolCallJsonConfig(),
    preprocessor: ResponseProcessor = ManualToolCallFixProcessor(toolRegistry, toolCallJsonConfig),
    fallbackProcessor: ResponseProcessor? = null,
    assessToolCallIntentSystemMessage: String = Prompts.assessToolCallIntent,
    fixToolCallSystemMessage: String = Prompts.fixToolCall,
    invalidJsonFeedback: (List<ToolDescriptor>) -> String = Prompts::invalidJsonFeedback,
    invalidNameFeedback: (String, List<ToolDescriptor>) -> String = Prompts::invalidNameFeedback,
    invalidArgumentsFeedback: (String, ToolDescriptor) -> String = Prompts::invalidArgumentsFeedback,
    maxRetries: Int = 3
)
```

Functions


Chains two processors together.

```kotlin
open suspend override fun process(
    executor: PromptExecutor,
    prompt: Prompt,
    model: LLModel,
    tools: List<ToolDescriptor>,
    responses: List<Message.Response>
): List<Message.Response>
```

Processes the given LLM responses, which were received using the given executor, prompt, model, and tools.

```kotlin
suspend fun process(
    executor: PromptExecutor,
    prompt: Prompt,
    model: LLModel,
    tools: List<ToolDescriptor>,
    response: Message.Response
): Message.Response
```

Processes a single LLM response.