MultiLLMPromptExecutor

MultiLLMPromptExecutor is a class responsible for executing prompts across multiple Large Language Models (LLMs). Each request is routed to the LLMClient registered for the requested provider; if no client is registered for that provider, an optional fallback strategy redirects the request to a configured fallback provider and model.
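The routing behavior can be sketched as follows. This is a minimal, self-contained illustration, not the library's implementation: LLMProvider, LLMClient, and MultiClientExecutor below are simplified stand-ins for the real types.

```kotlin
// Illustrative stand-ins for the real provider and client types.
enum class LLMProvider { OPENAI, ANTHROPIC, OLLAMA }

fun interface LLMClient {
    fun execute(prompt: String): String
}

// Sketch of the dispatch-with-fallback behavior: pick the client registered
// for the requested provider, or fall back to a configured provider.
class MultiClientExecutor(
    private val clients: Map<LLMProvider, LLMClient>,
    private val fallbackProvider: LLMProvider? = null
) {
    fun execute(provider: LLMProvider, prompt: String): String {
        val client = clients[provider]
            ?: fallbackProvider?.let { clients[it] }
            ?: error("No client registered for $provider and no fallback configured")
        return client.execute(prompt)
    }
}
```

A request for a provider without a registered client is served by the fallback client instead of failing, as long as a fallback provider is configured.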

Parameters

llmClients

A map associating each LLMProvider with its corresponding LLMClient.

fallback

Optional settings configuring the fallback mechanism used when no client is registered for the requested provider.

Constructors

constructor(vararg llmClients: Pair<LLMProvider, LLMClient>)

Initializes a new instance of the MultiLLMPromptExecutor class with multiple LLM clients.

constructor(llmClients: Map<LLMProvider, LLMClient>, fallback: MultiLLMPromptExecutor.FallbackPromptExecutorSettings? = null)

Constructs an executor from a map of LLM providers to their respective clients, with optional fallback settings.

Types

data class FallbackPromptExecutorSettings(val fallbackProvider: LLMProvider, val fallbackModel: LLModel)

Represents configuration for a fallback large language model (LLM) execution strategy.
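To show how these settings might be consulted, here is a hedged sketch of the fallback resolution step. Provider, Model, FallbackSettings, and resolve are hypothetical stand-ins that mirror the shape of FallbackPromptExecutorSettings, not the library's actual code.

```kotlin
// Illustrative stand-ins for the real LLMProvider and LLModel types.
enum class Provider { OPENAI, ANTHROPIC }
data class Model(val id: String)

// Mirrors the shape of FallbackPromptExecutorSettings: the provider and model
// to substitute when the requested provider has no registered client.
data class FallbackSettings(val fallbackProvider: Provider, val fallbackModel: Model)

// Resolve which (provider, model) pair a request should actually use.
fun resolve(
    requested: Provider,
    requestedModel: Model,
    registered: Set<Provider>,
    fallback: FallbackSettings?
): Pair<Provider, Model> =
    if (requested in registered) requested to requestedModel
    else fallback?.let { it.fallbackProvider to it.fallbackModel }
        ?: error("No client for $requested and no fallback configured")
```

Note that the fallback replaces both the provider and the model, since the originally requested model may not be served by the fallback provider.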

Functions

open suspend override fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Message.Response>

Executes a given prompt using the specified tools and model, and returns a list of response messages.

open suspend fun execute(prompt: Prompt, model: LLModel): String

Executes the given prompt with the specified model and returns the complete response content as a single string.
open suspend override fun executeStreaming(prompt: Prompt, model: LLModel): Flow<String>

Executes the given prompt with the specified model and streams the response in chunks as a flow.
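The chunked emission can be pictured with the following sketch. The real executeStreaming is a suspend function returning a kotlinx.coroutines Flow<String>; to keep this illustration self-contained, streamInChunks (a hypothetical helper) uses a plain Sequence to show the same idea of emitting a response piecewise rather than as one string.

```kotlin
// Illustrative only: the real API streams asynchronously via Flow<String>;
// a Sequence demonstrates the same chunked emission synchronously.
fun streamInChunks(fullResponse: String, chunkSize: Int = 8): Sequence<String> =
    sequence {
        var start = 0
        while (start < fullResponse.length) {
            val end = minOf(start + chunkSize, fullResponse.length)
            yield(fullResponse.substring(start, end))
            start = end
        }
    }
```

For example, `streamInChunks("hello world", 4).toList()` yields `["hell", "o wo", "rld"]`; a consumer of the real Flow would collect chunks the same way, but suspending between emissions.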