MultiLLMPromptExecutor

MultiLLMPromptExecutor is a class that executes prompts across multiple Large Language Models (LLMs). It routes each request directly to the LLM client registered for the requested provider, or uses a fallback strategy when no client is registered for that provider.

Parameters

llmClients

A map containing LLM providers associated with their respective LLMClients.

fallback

Optional settings to configure the fallback mechanism in case a specific provider is not directly available.

Constructors


Constructs an executor instance with a map of LLM providers associated with their respective clients, with optional fallback settings.

Secondary constructor for MultiLLMPromptExecutor that accepts a list of LLMClient instances. The provided clients are processed to create a mapping of LLMProvider to their respective LLMClient.

constructor(vararg llmClients: LLMClient)

Secondary constructor for MultiLLMPromptExecutor that accepts a variable number of LLMClient instances. The provided clients are processed to create a mapping of LLMProvider to their respective LLMClient.
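The vararg and list constructors both reduce the supplied clients to a provider-keyed map. A dependency-free conceptual sketch of that mapping step (StubProvider and StubClient are simplified stand-ins, not the real LLMProvider/LLMClient types):

```kotlin
// Conceptual sketch, not the library source: collapsing a vararg of clients
// into a provider-to-client map, as the secondary constructors describe.
enum class StubProvider { OPENAI, ANTHROPIC }

data class StubClient(val provider: StubProvider)

fun associateClients(vararg clients: StubClient): Map<StubProvider, StubClient> =
    clients.associateBy { it.provider } // a later client for the same provider wins

val clientMap = associateClients(
    StubClient(StubProvider.OPENAI),
    StubClient(StubProvider.ANTHROPIC),
)
```

With this shape, each incoming request can be dispatched with a single map lookup on the model's provider.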

Types

data class FallbackPromptExecutorSettings(val fallbackProvider: LLMProvider, val fallbackModel: LLModel)

Represents configuration for a fallback large language model (LLM) execution strategy.
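The fallback strategy this settings class configures can be sketched as a provider-resolution step: prefer the requested provider's client, otherwise reroute to the fallback provider. The stub below uses plain strings instead of the real LLMProvider/LLModel types:

```kotlin
// Conceptual sketch of fallback resolution. FallbackSettings mirrors the shape
// of FallbackPromptExecutorSettings but with stub string fields.
data class FallbackSettings(val fallbackProvider: String, val fallbackModel: String)

fun resolveProvider(
    registered: Set<String>,   // providers with a registered client
    requested: String,         // provider of the requested model
    fallback: FallbackSettings?,
): String = when {
    requested in registered -> requested
    fallback != null && fallback.fallbackProvider in registered -> fallback.fallbackProvider
    else -> error("No client for '$requested' and no usable fallback")
}
```

Note that falling back also switches the model to the configured fallbackModel, since the original model may not exist on the fallback provider.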

Functions

open override fun close()

open suspend override fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Message.Response>

Executes a given prompt using the specified tools and model, and returns a list of response messages.

open suspend override fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<LLMChoice>

Executes a given prompt using the specified tools and model and returns a list of model choices.

open override fun executeStreaming(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): Flow<StreamFrame>

Executes the given prompt with the specified model and streams the response in chunks as a flow.
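The streaming variant delivers the response incrementally rather than as one message. A minimal sketch of consuming chunked output, using a Sequence in place of the real kotlinx.coroutines Flow of StreamFrame values so the example stays dependency-free:

```kotlin
// Sketch of consuming a chunked streaming response. The real API yields
// StreamFrame values from a Flow; a Sequence stands in for it here.
fun streamChunks(): Sequence<String> = sequenceOf("Hel", "lo, ", "world")

val full = buildString {
    for (chunk in streamChunks()) append(chunk) // handle each frame as it arrives
}
```

With the real Flow-based API, the equivalent would be a `collect { frame -> ... }` inside a coroutine.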

suspend fun <T> PromptExecutor.executeStructured(prompt: Prompt, model: LLModel, config: StructuredRequestConfig<T>, fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>
inline suspend fun <T> PromptExecutor.executeStructured(prompt: Prompt, model: LLModel, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>
suspend fun <T> PromptExecutor.executeStructured(prompt: Prompt, model: LLModel, serializer: KSerializer<T>, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>

Executes a prompt with structured output, enhancing it with schema instructions or native structured output parameter, and parses the response into the defined structure.
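The role of the optional fixing parser can be sketched as a parse-then-repair step: try to parse the raw response directly, and only if that fails, let the fixer rewrite the payload and retry. The stub below parses an Int instead of a real serializer-backed structure:

```kotlin
// Conceptual sketch of the structured-output flow with an optional fixer.
// The real executor delegates to StructuredRequestConfig / StructureFixingParser;
// here a plain (String) -> String lambda stands in for the fixing parser.
fun parseStructured(
    raw: String,
    fixer: ((String) -> String)? = null,
): Result<Int> {
    val direct = raw.trim().toIntOrNull()
    if (direct != null) return Result.success(direct)
    val fixed = fixer?.invoke(raw)?.trim()?.toIntOrNull()
    return if (fixed != null) Result.success(fixed)
    else Result.failure(IllegalArgumentException("Unparseable response: $raw"))
}
```

This mirrors why the function returns Result<StructuredResponse<T>>: a malformed model response surfaces as a failure rather than an exception.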

Basic JSON schema generator required for the given model. Returns BasicJsonSchemaGenerator by default.

Standard JSON schema generator required for the given model. Returns StandardJsonSchemaGenerator by default.

open suspend override fun models(): List<LLModel>

Retrieves a list of available models from all LLM clients managed by this executor.
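Since the executor manages several clients, model discovery is an aggregation over all of them. A stdlib-only sketch of that shape (ModelStubClient is a stand-in; the real LLMClient would be queried asynchronously):

```kotlin
// Sketch of aggregating available models across all registered clients,
// as models() is described to do. Duplicates across clients are dropped.
data class ModelStubClient(val models: List<String>)

fun allModels(clients: Collection<ModelStubClient>): List<String> =
    clients.flatMap { it.models }.distinct()
```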

open suspend override fun moderate(prompt: Prompt, model: LLModel): ModerationResult

Moderates the provided multi-modal content using the specified model.


Parses a structured response from the assistant message using the provided structured output configuration and language model. If a fixing parser is specified in the configuration, it will be used; otherwise, the structure will be parsed directly.