PromptExecutor

expect abstract class PromptExecutor : PromptExecutorAPI

An abstract class representing an executor for processing LLM prompts. It defines methods for executing prompts against models, with or without tool assistance, as well as for streaming responses.

Implements AutoCloseable, as prompt executors typically hold LLM clients. Always close the executor when you are finished with it.

Note: a single PromptExecutor may embed multiple LLM clients for different LLM providers supporting different models.
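A minimal lifecycle sketch, assuming a concrete executor instance plus pre-built `prompt` and `model` values; the `createExecutor()` factory is hypothetical. Because PromptExecutor implements AutoCloseable, Kotlin's `use` guarantees `close()` runs:

```kotlin
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // Hypothetical factory; any concrete PromptExecutor behaves the same.
    val executor: PromptExecutor = createExecutor()

    // `use` closes the underlying LLM clients even if the block throws.
    executor.use { exec ->
        val responses = exec.execute(prompt, model) // tools default to emptyList()
        responses.forEach { println(it) }
    }
}
```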

Inheritors

Constructors

constructor()

Types

object Companion

Companion object for PromptExecutor.

Functions

close
abstract fun close()

Closes the executor and releases any resources held by its underlying LLM clients.
execute

abstract suspend fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor> = emptyList()): List<Message.Response>

Executes a given prompt using the specified LLM and tools, returning a list of responses from the model.

fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor> = emptyList(), executorService: ExecutorService? = null): List<Message.Response>

Blocking overload that runs the call on an optional ExecutorService.
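A hedged sketch of tool-assisted execution, assuming a suspend context and a pre-built ToolDescriptor named `searchTool` (hypothetical). With tools supplied, the model may answer directly or request a tool call, so the caller inspects each response:

```kotlin
// Suspend context assumed; `searchTool` is a pre-built ToolDescriptor.
val responses = executor.execute(
    prompt = prompt,
    model = model,
    tools = listOf(searchTool),
)

// Responses may mix plain answers and tool-call requests;
// inspect each one to decide what to do next.
for (response in responses) {
    println(response)
}
```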

executeMultipleChoices

open suspend fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<LLMChoice>

Receives multiple independent choices from the LLM. This method is implemented only for providers that support multiple LLM choices.

fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor> = emptyList(), executorService: ExecutorService? = null): List<LLMChoice>

Blocking overload that runs the call on an optional ExecutorService.

executeStreaming

abstract fun executeStreaming(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor> = emptyList()): Flow<StreamFrame>

Executes a given prompt using the specified LLM and returns the output as a Flow of StreamFrame objects.
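A streaming sketch; the `StreamFrame` subtypes used here (`Append` carrying partial text, `End`) are assumptions about the frame hierarchy:

```kotlin
// Suspend context assumed. Frames arrive incrementally from a cold Flow.
executor.executeStreaming(prompt, model).collect { frame ->
    when (frame) {
        is StreamFrame.Append -> print(frame.text) // partial text chunk (assumed subtype)
        is StreamFrame.End -> println()            // stream completed (assumed subtype)
        else -> { /* tool-call frames, etc. */ }
    }
}
```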

Executes a given prompt using the specified language model (LLM) and tools, providing the results as a synchronous stream of StreamFrame objects.

executeStructured
suspend fun <T> PromptExecutor.executeStructured(prompt: Prompt, model: LLModel, config: StructuredRequestConfig<T>, fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>
inline suspend fun <T> PromptExecutor.executeStructured(prompt: Prompt, model: LLModel, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>
suspend fun <T> PromptExecutor.executeStructured(prompt: Prompt, model: LLModel, serializer: KSerializer<T>, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>

Executes a prompt with structured output, enhancing it with schema instructions or a native structured-output parameter, and parses the response into the defined structure.
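A sketch of the serializer-based overload, assuming a kotlinx.serialization-annotated target type and that `StructuredResponse` exposes the parsed value as `structure` (an assumption about its shape):

```kotlin
import kotlinx.serialization.Serializable

@Serializable
data class WeatherReport(val city: String, val temperatureC: Double)

// Suspend context assumed; examples and fixingParser are optional.
val result = executor.executeStructured(
    prompt = prompt,
    model = model,
    serializer = WeatherReport.serializer(),
)

result
    .onSuccess { println(it.structure) }          // parsed WeatherReport (field name assumed)
    .onFailure { println("parsing failed: $it") } // schema/parse error
```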

Basic JSON schema generator required for the given model. Returns BasicJsonSchemaGenerator by default.

Standard JSON schema generator required for the given model. Returns StandardJsonSchemaGenerator by default.

models

open suspend fun models(): List<LLModel>

Retrieves a list of available models from all LLM clients managed by this executor.

fun models(executorService: ExecutorService? = null): List<LLModel>

Blocking overload that runs the call on an optional ExecutorService.
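A sketch of model discovery across the executor's embedded clients; the `id` property on LLModel is an assumption about its shape:

```kotlin
// Suspend context assumed.
val available = executor.models()

// Pick a model by identifier (field name assumed), or fail loudly.
val model = available.firstOrNull { it.id == "some-model-id" }
    ?: error("requested model is not offered by any embedded client")
```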

moderate

abstract suspend fun moderate(prompt: Prompt, model: LLModel): ModerationResult

Moderates the content of a given message, including attachments, using the specified LLM.

fun moderate(prompt: Prompt, model: LLModel, executorService: ExecutorService? = null): ModerationResult

Blocking overload that runs the call on an optional ExecutorService.
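A moderation sketch; the `isHarmful` flag on ModerationResult and the `moderationModel` value are assumptions:

```kotlin
// Suspend context assumed; moderationModel is a moderation-capable LLModel.
val verdict: ModerationResult = executor.moderate(prompt, moderationModel)

if (verdict.isHarmful) { // field name assumed
    println("content flagged; refusing to proceed")
} else {
    // safe to continue with normal execution
}
```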

Parses a structured response from the assistant message using the provided structured output configuration and language model. If a fixing parser is specified in the configuration, it will be used; otherwise, the structure will be parsed directly.