PromptExecutor
Functions
abstract suspend fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor> = emptyList()): List<Message.Response>
Executes a given prompt using the specified LLM and tools, returning a list of responses from the model.
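A minimal usage sketch (assumptions, not part of this interface: the `executor` instance, Koog's `prompt` DSL builder, and the `OpenAIModels` constant are illustrative):

```kotlin
suspend fun summarize(executor: PromptExecutor) {
    // Hypothetical prompt built with Koog's prompt DSL.
    val prompt = prompt("summary-demo") {
        system("You are a concise assistant.")
        user("Summarize the release notes in one sentence.")
    }
    // No tools passed, so the default empty list is used.
    val responses: List<Message.Response> = executor.execute(
        prompt = prompt,
        model = OpenAIModels.Chat.GPT4o, // any supported LLModel works here
    )
    responses.forEach { println(it.content) }
}
```

Because `execute` is a suspend function, it must be called from a coroutine, e.g. inside `runBlocking { ... }` or another suspend function.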
open suspend fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<LLMChoice>
Requests multiple independent choices from the LLM. This method is implemented only for providers that support multiple LLM choices.
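A hedged sketch of requesting several choices (the choice-handling shown is an assumption; on providers without multiple-choice support this call may fail):

```kotlin
suspend fun inspectChoices(executor: PromptExecutor, prompt: Prompt, model: LLModel) {
    // Each LLMChoice is an independent alternative produced for the same prompt.
    val choices: List<LLMChoice> = executor.executeMultipleChoices(
        prompt = prompt,
        model = model,
        tools = emptyList(), // no default value here, so tools must be passed explicitly
    )
    choices.forEachIndexed { index, choice ->
        println("Choice #$index: $choice")
    }
}
```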
abstract fun executeStreaming(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor> = emptyList()): Flow<StreamFrame>
Executes a given prompt using the specified LLM and returns a stream of output as a flow of StreamFrame objects.
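A streaming sketch that collects the returned flow (the `StreamFrame` variant names used in the `when` branches are assumptions about the frame hierarchy, shown for illustration):

```kotlin
suspend fun streamToConsole(executor: PromptExecutor, prompt: Prompt, model: LLModel) {
    executor.executeStreaming(prompt, model).collect { frame ->
        // Frame variants below are assumptions, not confirmed by this reference.
        when (frame) {
            is StreamFrame.Append -> print(frame.text)               // incremental text chunk
            is StreamFrame.ToolCall -> println("\n[tool: ${frame.name}]")
            is StreamFrame.End -> println()                          // stream finished
        }
    }
}
```

Unlike `execute`, this function is not suspending itself; the returned `Flow` is cold, and suspension happens when the flow is collected.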
Returns the basic JSON schema generator required for the given model. Returns BasicJsonSchemaGenerator by default.
Returns the standard JSON schema generator required for the given model. Returns StandardJsonSchemaGenerator by default.
Moderates the content of a given message with attachments using a specified LLM.