LLMClient

expect abstract class LLMClient : LLMClientAPI

Common interface for direct communication with LLM providers. This interface defines methods for executing prompts and streaming responses.

Implements AutoCloseable, as LLM clients typically work with IO resources. Always close the client when you are finished with it.
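Because the client implements AutoCloseable, Kotlin's `use` extension is the natural way to scope its lifetime. A minimal, dependency-free sketch of the pattern; `FakeClient` below is a hypothetical stand-in, not part of the API:

```kotlin
// Stand-in for a concrete LLMClient; only the AutoCloseable
// contract described above is modeled here.
class FakeClient : AutoCloseable {
    var closed = false
        private set

    fun ping(): String = "pong"

    override fun close() {
        closed = true // a real client would release sockets, thread pools, etc.
    }
}

fun main() {
    val client = FakeClient()
    // `use` closes the client even if the block throws.
    val reply = client.use { it.ping() }
    println(reply)          // pong
    println(client.closed)  // true
}
```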

Inheritors


Constructors

expect constructor()

Properties

open val clientName: String

The name of the client.

Functions

close
expect abstract fun close()
execute

abstract suspend fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor> = emptyList()): List<Message.Response>

Executes a prompt and returns a list of response messages.

fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor> = emptyList(), executorService: ExecutorService? = null): List<Message.Response>
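The call shape of the suspend `execute` overload can be sketched without the library on the classpath. `Prompt`, `LLModel`, `Response`, and `FakeClient` below are hypothetical stand-ins that only mirror the documented signature (the `tools` parameter is omitted for brevity), and `runSuspend` is a stdlib-only driver standing in for `runBlocking`:

```kotlin
import kotlin.coroutines.Continuation
import kotlin.coroutines.EmptyCoroutineContext
import kotlin.coroutines.startCoroutine

// Hypothetical stand-ins, just detailed enough to mirror execute(...).
data class Prompt(val text: String)
data class LLModel(val id: String)
data class Response(val content: String)

class FakeClient {
    suspend fun execute(prompt: Prompt, model: LLModel): List<Response> =
        listOf(Response("echo: ${prompt.text} (${model.id})"))
}

// Minimal stdlib-only way to run a suspend function; in a real project
// use runBlocking or a coroutine scope from kotlinx.coroutines instead.
fun <T> runSuspend(block: suspend () -> T): T {
    var result: Result<T>? = null
    block.startCoroutine(object : Continuation<T> {
        override val context = EmptyCoroutineContext
        override fun resumeWith(r: Result<T>) { result = r }
    })
    return result!!.getOrThrow()
}

fun main() {
    val responses = runSuspend {
        FakeClient().execute(Prompt("Hello"), LLModel("some-model"))
    }
    println(responses.single().content) // echo: Hello (some-model)
}
```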

executeMultipleChoices

open suspend fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor> = emptyList()): List<LLMChoice>

Executes a prompt and returns a list of LLM choices.

fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor> = emptyList(), executorService: ExecutorService? = null): List<LLMChoice>

executeStreaming

open fun executeStreaming(prompt: Prompt, model: LLModel): Flow<StreamFrame>
open fun executeStreaming(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): Flow<StreamFrame>

Executes a prompt and returns a streaming flow of response chunks.
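The real API returns a kotlinx.coroutines `Flow<StreamFrame>`; the consumption pattern can be shown with a dependency-free analogy using `Sequence`. The `StreamFrame` variants below (`Append`, `End`) are hypothetical stand-ins, not the library's actual frame types:

```kotlin
// Hypothetical stand-in: each frame carries an incremental text delta.
sealed interface StreamFrame {
    data class Append(val text: String) : StreamFrame
    data class End(val finishReason: String) : StreamFrame
}

// Stands in for executeStreaming; a Sequence is used only to show
// the chunk-by-chunk consumption shape without coroutine dependencies.
fun fakeStreaming(prompt: String): Sequence<StreamFrame> = sequence {
    prompt.split(" ").forEach { yield(StreamFrame.Append("$it ")) }
    yield(StreamFrame.End("stop"))
}

fun collectText(frames: Sequence<StreamFrame>): String {
    val sb = StringBuilder()
    for (frame in frames) {
        when (frame) {
            is StreamFrame.Append -> sb.append(frame.text) // arrives incrementally
            is StreamFrame.End -> {}                       // stream finished
        }
    }
    return sb.toString().trim()
}

fun main() {
    println(collectText(fakeStreaming("streamed token by token")))
    // streamed token by token
}
```

With the real client, the same loop body would sit inside `flow.collect { frame -> ... }`.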


Executes a given prompt using the specified language model (LLM) and tools, providing the results as a synchronous stream of StreamFrame objects.

Basic JSON schema generator supported by the LLMClient. Returns BasicJsonSchemaGenerator by default.

Standard JSON schema generator supported by the LLMClient. Returns StandardJsonSchemaGenerator by default.

llmProvider

abstract fun llmProvider(): LLMProvider

Retrieves the LLMProvider instance associated with this client.

models

open suspend fun models(): List<LLModel>

Retrieves a list of the available Large Language Models (LLMs) supported by the client.

fun models(executorService: ExecutorService? = null): List<LLModel>

moderate

abstract suspend fun moderate(prompt: Prompt, model: LLModel): ModerationResult

Analyzes the provided prompt for violations of content policies or other moderation criteria.

fun moderate(prompt: Prompt, model: LLModel, executorService: ExecutorService? = null): ModerationResult
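A common gating pattern is to moderate a prompt before executing it. The sketch below is dependency-free: `ModerationResult` with a single `isHarmful` flag, `Prompt`, and `FakeClient` are hypothetical simplifications of the real API (the real `moderate` is a suspend function and also takes a model):

```kotlin
// Hypothetical stand-ins; the real ModerationResult exposes richer data,
// a single boolean flag is assumed here purely for illustration.
data class ModerationResult(val isHarmful: Boolean)
data class Prompt(val text: String)

class FakeClient {
    fun moderate(prompt: Prompt): ModerationResult =
        ModerationResult(isHarmful = "attack" in prompt.text)

    fun execute(prompt: Prompt): String = "response to: ${prompt.text}"
}

// Gate: only clean prompts reach execute.
fun answerSafely(client: FakeClient, prompt: Prompt): String =
    if (client.moderate(prompt).isHarmful) "Request rejected by moderation."
    else client.execute(prompt)

fun main() {
    val client = FakeClient()
    println(answerSafely(client, Prompt("hello")))       // response to: hello
    println(answerSafely(client, Prompt("attack plan"))) // Request rejected by moderation.
}
```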

toRetryingClient

fun LLMClient.toRetryingClient(retryConfig: RetryConfig = RetryConfig.DEFAULT): RetryingLLMClient

Converts an instance of LLMClient into a retrying client with customizable retry behavior.
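The decorator idea behind toRetryingClient can be sketched without the library: wrap an operation and retry transient failures up to a limit. `RetryingCaller` and its `maxAttempts` parameter are hypothetical simplifications of `RetryConfig`, and a real client would retry only errors it classifies as retryable:

```kotlin
// Minimal sketch of the retry-decorator idea behind toRetryingClient.
class RetryingCaller(private val maxAttempts: Int = 3) {
    fun <T> call(operation: () -> T): T {
        var lastError: Exception? = null
        repeat(maxAttempts) {
            try {
                return operation() // success: return immediately
            } catch (e: Exception) {
                lastError = e // a real client would retry only retryable errors
            }
        }
        throw lastError ?: IllegalStateException("no attempts made")
    }
}

fun main() {
    var calls = 0
    val caller = RetryingCaller(maxAttempts = 3)
    // Fails twice, then succeeds: the wrapper hides the transient failures.
    val result = caller.call {
        calls++
        if (calls < 3) error("transient failure") else "ok"
    }
    println("$result after $calls calls") // ok after 3 calls
}
```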