PromptExecutor

An interface representing an executor that processes language model prompts. It defines methods for executing prompts against models, with or without tool assistance, as well as for streaming and moderating responses.

Note: a single PromptExecutor may embed multiple LLM clients for different LLM providers, each supporting different models.
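To illustrate the multi-client idea, here is a minimal sketch of an executor that routes a request to the client registered for the model's provider. All type and member names below (`LLMClient`, `complete`, the data classes) are stand-ins assumed for illustration, not the library's real types:

```kotlin
import kotlinx.coroutines.runBlocking

// Hypothetical stand-ins for the library types.
data class Prompt(val text: String)
data class LLModel(val id: String, val provider: String)
data class ToolDescriptor(val name: String)
data class Response(val content: String)

// A client for a single provider (assumed shape).
interface LLMClient {
    suspend fun complete(prompt: Prompt, model: LLModel): List<Response>
}

// An executor embedding several clients, routing by the model's provider.
// Tool forwarding is elided to keep the routing logic visible.
class MultiProviderExecutor(private val clients: Map<String, LLMClient>) {
    suspend fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Response> {
        val client = clients[model.provider]
            ?: error("No client registered for provider ${model.provider}")
        return client.complete(prompt, model)
    }
}

fun main() = runBlocking {
    val fake = object : LLMClient {
        override suspend fun complete(prompt: Prompt, model: LLModel) =
            listOf(Response("echo: ${prompt.text}"))
    }
    val executor = MultiProviderExecutor(mapOf("openai" to fake, "anthropic" to fake))
    val responses = executor.execute(Prompt("hi"), LLModel("gpt-x", "openai"), emptyList())
    println(responses.first().content)  // → echo: hi
}
```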

Functions

abstract suspend fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Message.Response>

Executes a given prompt using the specified language model and tools, returning a list of responses from the model.
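A sketch of a tool-assisted call against this signature. The stub executor and all data classes are assumptions for illustration; a real implementation would let the model decide whether to invoke the offered tools:

```kotlin
import kotlinx.coroutines.runBlocking

// Hypothetical stand-ins for the library types.
data class Prompt(val text: String)
data class LLModel(val id: String)
data class ToolDescriptor(val name: String, val description: String)
data class Response(val content: String)

interface PromptExecutor {
    suspend fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Response>
}

fun main() = runBlocking {
    // A stub executor that only reports which tools were offered to the model.
    val stub = object : PromptExecutor {
        override suspend fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>) =
            listOf(Response("tools offered: ${tools.joinToString { it.name }}"))
    }
    val tools = listOf(
        ToolDescriptor("search", "Search the web"),
        ToolDescriptor("calculator", "Evaluate arithmetic"),
    )
    println(stub.execute(Prompt("What is 2 + 2?"), LLModel("gpt-x"), tools).single().content)
    // → tools offered: search, calculator
}
```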

suspend fun PromptExecutor.execute(prompt: Prompt, model: LLModel): Message.Response

Executes a given prompt using the specified language model and returns a single response.
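One plausible shape for this convenience extension, assuming it delegates to the tool-aware `execute` with an empty tool list and expects exactly one response. This is a sketch over stand-in types, not the library's actual implementation:

```kotlin
import kotlinx.coroutines.runBlocking

// Hypothetical stand-ins for the library types.
data class Prompt(val text: String)
data class LLModel(val id: String)
data class ToolDescriptor(val name: String)
data class Response(val content: String)

interface PromptExecutor {
    suspend fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Response>
}

// Sketch of the extension: no tools, exactly one response expected.
suspend fun PromptExecutor.execute(prompt: Prompt, model: LLModel): Response =
    execute(prompt, model, emptyList()).single()

fun main() = runBlocking {
    val stub = object : PromptExecutor {
        override suspend fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>) =
            listOf(Response("answer"))
    }
    println(stub.execute(Prompt("q"), LLModel("m")).content)  // → answer
}
```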

open suspend fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<LLMChoice>

Receives multiple independent choices from the LLM. This method is supported only by providers that can return multiple choices for a single request.
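Because the member is `open`, a default implementation exists that subclasses may override. A plausible sketch, assuming the default fails fast for providers without multi-choice support and that an `LLMChoice` is one list of responses (both assumptions; stand-in types throughout):

```kotlin
import kotlinx.coroutines.runBlocking

// Hypothetical stand-ins; here an LLMChoice is modeled as a list of responses.
data class Prompt(val text: String)
data class LLModel(val id: String)
data class ToolDescriptor(val name: String)
data class Response(val content: String)
typealias LLMChoice = List<Response>

interface PromptExecutor {
    suspend fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Response>

    // Assumed default: providers without native multi-choice support fail fast.
    suspend fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<LLMChoice> =
        throw UnsupportedOperationException("Multiple choices are not supported by this executor")
}

fun main() = runBlocking {
    // An executor overriding the default to return three independent choices.
    val multi = object : PromptExecutor {
        override suspend fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>) =
            listOf(Response("single"))
        override suspend fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>) =
            List(3) { i -> listOf(Response("choice $i")) }
    }
    val choices = multi.executeMultipleChoices(Prompt("q"), LLModel("m"), emptyList())
    println(choices.size)  // → 3
}
```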

abstract suspend fun executeStreaming(prompt: Prompt, model: LLModel): Flow<String>

Executes a given prompt using the specified language model and returns a stream of output as a flow of strings.
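Since the result is a `Flow<String>`, callers collect token chunks as they arrive. A sketch with a stub that streams a reply in pieces; the data classes are stand-ins assumed for illustration:

```kotlin
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// Hypothetical stand-ins for the library types.
data class Prompt(val text: String)
data class LLModel(val id: String)

interface PromptExecutor {
    suspend fun executeStreaming(prompt: Prompt, model: LLModel): Flow<String>
}

fun main() = runBlocking {
    // A stub that streams a reply in three chunks.
    val stub = object : PromptExecutor {
        override suspend fun executeStreaming(prompt: Prompt, model: LLModel): Flow<String> =
            flowOf("Hel", "lo", "!")
    }
    val out = StringBuilder()
    // Collect chunks as they are emitted, e.g. to render partial output.
    stub.executeStreaming(Prompt("hi"), LLModel("m")).collect { chunk -> out.append(chunk) }
    println(out)  // → Hello!
}
```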

abstract suspend fun moderate(prompt: Prompt, model: LLModel): ModerationResult

Moderates the content of a given prompt, including any message attachments, using the specified language model.
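A sketch of how a caller might consume this method. The `ModerationResult` fields and the keyword-matching moderator below are illustrative assumptions only; a real executor would delegate moderation to the model:

```kotlin
import kotlinx.coroutines.runBlocking

// Hypothetical stand-ins; the real ModerationResult carries provider-specific detail.
data class Prompt(val text: String)
data class LLModel(val id: String)
data class ModerationResult(val isHarmful: Boolean, val violatedCategories: List<String>)

interface PromptExecutor {
    suspend fun moderate(prompt: Prompt, model: LLModel): ModerationResult
}

fun main() = runBlocking {
    // A toy moderator flagging a fixed word list (illustration only).
    val banned = setOf("attack")
    val moderator = object : PromptExecutor {
        override suspend fun moderate(prompt: Prompt, model: LLModel): ModerationResult {
            val hits = banned.filter { it in prompt.text.lowercase() }
            return ModerationResult(hits.isNotEmpty(), hits)
        }
    }
    val result = moderator.moderate(Prompt("please attack the castle"), LLModel("mod-model"))
    println(result.isHarmful)  // → true
}
```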