PromptExecutor

An interface representing an executor that processes language model prompts. It defines methods for executing prompts against models, with or without tool assistance, as well as for streaming responses and moderating content.

Note: a single PromptExecutor may embed multiple LLM clients for different LLM providers, each supporting its own set of models.
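
For illustration only, a minimal sketch of such a delegating executor. Only the PromptExecutor signatures listed under Functions are taken from this page; the LLMClient abstraction, the LLModel.provider property, the LLMProvider type used as a routing key, and the imports are assumptions.

import kotlinx.coroutines.flow.Flow

// Hypothetical per-provider client abstraction (assumption, not part of this page).
interface LLMClient {
    suspend fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Message.Response>
    fun executeStreaming(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): Flow<StreamFrame>
    suspend fun moderate(prompt: Prompt, model: LLModel): ModerationResult
}

// A PromptExecutor that routes each call to the client registered for the model's provider.
class RoutingPromptExecutor(
    private val clients: Map<LLMProvider, LLMClient> // assumes LLModel exposes a provider
) : PromptExecutor {

    private fun clientFor(model: LLModel): LLMClient =
        clients[model.provider] ?: error("No client registered for provider ${model.provider}")

    override suspend fun execute(
        prompt: Prompt,
        model: LLModel,
        tools: List<ToolDescriptor>
    ): List<Message.Response> = clientFor(model).execute(prompt, model, tools)

    override fun executeStreaming(
        prompt: Prompt,
        model: LLModel,
        tools: List<ToolDescriptor>
    ): Flow<StreamFrame> = clientFor(model).executeStreaming(prompt, model, tools)

    override suspend fun moderate(prompt: Prompt, model: LLModel): ModerationResult =
        clientFor(model).moderate(prompt, model)
}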

Functions

abstract suspend fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor> = emptyList()): List<Message.Response>

Executes a given prompt using the specified language model and tools, returning a list of responses from the model.
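
A hedged usage sketch of execute, assuming a PromptExecutor instance, a Prompt, and an LLModel are already available from the surrounding code; library imports and the helper name are illustrative only.

suspend fun askOnce(
    executor: PromptExecutor,
    prompt: Prompt,
    model: LLModel
): List<Message.Response> {
    // Tools are omitted here, so the default emptyList() applies and the model answers
    // without tool assistance; pass a List<ToolDescriptor> as the third argument to enable tool calls.
    val responses = executor.execute(prompt, model)
    responses.forEach { println(it) }
    return responses
}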

open suspend fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<LLMChoice>

Returns multiple independent choices from the LLM. This method is implemented only for providers that support multiple LLM choices.
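
A hedged sketch of requesting multiple choices and picking one, assuming the target provider supports them; the helper name and the empty tool list are illustrative, not part of this page.

suspend fun pickFirstChoice(
    executor: PromptExecutor,
    prompt: Prompt,
    model: LLModel
): LLMChoice {
    // Only meaningful against a provider that supports multiple choices; see the note above.
    val choices = executor.executeMultipleChoices(prompt, model, tools = emptyList())
    check(choices.isNotEmpty()) { "The model returned no choices" }
    return choices.first()
}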

abstract fun executeStreaming(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor> = emptyList()): Flow<StreamFrame>

Executes a given prompt using the specified language model and returns a stream of output as a flow of StreamFrame objects.
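
A hedged sketch of consuming the stream with kotlinx.coroutines Flow collection; the concrete StreamFrame subtypes mentioned in the comment are not listed on this page and are assumptions.

suspend fun streamToConsole(executor: PromptExecutor, prompt: Prompt, model: LLModel) {
    executor.executeStreaming(prompt, model).collect { frame ->
        // Each StreamFrame carries one chunk of model output; real code would branch
        // on the concrete frame type (for example a text delta, tool call, or end-of-stream marker).
        println(frame)
    }
}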

abstract suspend fun moderate(prompt: Prompt, model: LLModel): ModerationResult

Moderates the content of a given prompt, including any message attachments, using the specified language model.
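
A hedged sketch combining moderate with execute. The moderation policy is injected by the caller because ModerationResult's fields are not documented on this page; only the moderate and execute signatures above are taken from the documentation.

suspend fun moderateThenExecute(
    executor: PromptExecutor,
    prompt: Prompt,
    model: LLModel,
    // Caller-supplied policy, since ModerationResult's shape (flags, categories, scores) is not shown here.
    isAllowed: (ModerationResult) -> Boolean
): List<Message.Response>? {
    val moderation = executor.moderate(prompt, model)
    if (!isAllowed(moderation)) return null
    return executor.execute(prompt, model)
}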