SingleLLMPromptExecutor

Deprecated

Please use MultiLLMPromptExecutor instead.

Replace with:

import ai.koog.prompt.executor.llms.MultiLLMPromptExecutor
MultiLLMPromptExecutor

Executes prompts using a direct client for communication with large language model (LLM) providers.

This class provides functionality to execute prompts with optional tools and retrieve either a list of responses or a streaming flow of response chunks from the LLM provider. It delegates the actual LLM interaction to the provided implementation of LLMClient.
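As a minimal sketch of typical usage (assuming Koog's OpenAI client module; the package paths, model identifier, and the `content` accessor on the response are taken from common Koog usage and may differ in your version):

```kotlin
import ai.koog.prompt.dsl.prompt
import ai.koog.prompt.executor.clients.openai.OpenAILLMClient
import ai.koog.prompt.executor.clients.openai.OpenAIModels
import ai.koog.prompt.executor.llms.SingleLLMPromptExecutor

suspend fun main() {
    // Wrap a direct client for a single provider; the API key is read from the environment.
    val executor = SingleLLMPromptExecutor(OpenAILLMClient(System.getenv("OPENAI_API_KEY")))

    val prompt = prompt("greeting") {
        system("You are a concise assistant.")
        user("Say hello in one sentence.")
    }

    // Execute with no tools; returns a List<Message.Response>.
    val responses = executor.execute(prompt, OpenAIModels.Chat.GPT4o, tools = emptyList())
    responses.forEach { println(it.content) }

    // Release the underlying client's resources.
    executor.close()
}
```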

Parameters

llmClient

The client used for direct communication with the LLM provider.

Constructors

constructor(llmClient: LLMClient)

Creates an instance of SingleLLMPromptExecutor.

Functions

open override fun close()
open suspend override fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Message.Response>

Executes a given prompt using the specified LLM and tools, returning a list of responses from the model.

open suspend override fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<LLMChoice>

Receives multiple independent choices from the LLM. This method is implemented only for providers that support multiple LLM choices.

open override fun executeStreaming(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): Flow<StreamFrame>

Executes a given prompt using the specified LLM and returns a stream of output as a flow of StreamFrame objects.
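A brief streaming sketch (assuming an executor is in scope and that the `Prompt`, `LLModel`, and `PromptExecutor` package paths shown in the imports are correct for your Koog version; printing the raw frame is illustrative):

```kotlin
import ai.koog.prompt.dsl.Prompt
import ai.koog.prompt.executor.model.PromptExecutor
import ai.koog.prompt.llm.LLModel

suspend fun streamReply(executor: PromptExecutor, prompt: Prompt, model: LLModel) {
    // Collect the flow; each emitted StreamFrame carries a chunk of the model's output.
    executor.executeStreaming(prompt, model, tools = emptyList())
        .collect { frame -> println(frame) }
}
```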

suspend fun <T> PromptExecutor.executeStructured(prompt: Prompt, model: LLModel, config: StructuredRequestConfig<T>, fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>
inline suspend fun <T> PromptExecutor.executeStructured(prompt: Prompt, model: LLModel, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>
suspend fun <T> PromptExecutor.executeStructured(prompt: Prompt, model: LLModel, serializer: KSerializer<T>, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>

Executes a prompt with structured output, enhancing it with schema instructions or native structured output parameter, and parses the response into the defined structure.
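A hedged sketch of the reified overload (assuming `StructuredResponse` exposes the parsed value via a `structure` property, and that the imports match your Koog version; `WeatherForecast` is an illustrative type):

```kotlin
import ai.koog.prompt.dsl.Prompt
import ai.koog.prompt.executor.model.PromptExecutor
import ai.koog.prompt.llm.LLModel
import kotlinx.serialization.Serializable

// Target structure the LLM response is parsed into; the schema is derived from it.
@Serializable
data class WeatherForecast(val city: String, val temperatureC: Int)

suspend fun fetchForecast(executor: PromptExecutor, prompt: Prompt, model: LLModel) {
    executor.executeStructured<WeatherForecast>(prompt, model)
        .onSuccess { response -> println(response.structure) } // parsed WeatherForecast
        .onFailure { error -> println("Structured parsing failed: $error") }
}
```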

Returns the basic JSON schema generator required for the given model; BasicJsonSchemaGenerator by default.

Returns the standard JSON schema generator required for the given model; StandardJsonSchemaGenerator by default.

open suspend override fun models(): List<LLModel>

Retrieves the list of available models from the LLM client managed by this executor.

open suspend override fun moderate(prompt: Prompt, model: LLModel): ModerationResult

Moderates the content of a given message with attachments using a specified LLM.
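A short sketch (assuming the imports match your Koog version and that the model passed in is moderation-capable for the underlying provider; the `isHarmful` flag on ModerationResult is an assumption):

```kotlin
import ai.koog.prompt.dsl.Prompt
import ai.koog.prompt.executor.model.PromptExecutor
import ai.koog.prompt.llm.LLModel

suspend fun checkContent(executor: PromptExecutor, prompt: Prompt, model: LLModel): Boolean {
    val result = executor.moderate(prompt, model)
    return result.isHarmful // assumption: ModerationResult exposes a harmfulness flag
}
```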


Parses a structured response from the assistant message using the provided structured output configuration and language model. If a fixing parser is specified in the configuration, it will be used; otherwise, the structure will be parsed directly.