RetryingLLMClient

class RetryingLLMClient @JvmOverloads constructor(delegate: LLMClient, config: RetryConfig = RetryConfig()) : LLMClient

A decorator that adds retry capabilities to any LLMClient implementation.

This is a pure decorator - it has no knowledge of specific providers or implementations. It simply wraps any LLMClient and retries operations based on configurable policies.

Example usage:

val client = AnthropicLLMClient(apiKey)
val retryingClient = RetryingLLMClient(client, RetryConfig.CONSERVATIVE)
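Because the decorator is provider-agnostic, the same wrapping works for any LLMClient. A slightly fuller sketch of the pattern above (the Prompt, LLModel, and apiKey values are assumed to be in scope; RetryConfig.CONSERVATIVE is the preset from the example):

```kotlin
// Wrap any LLMClient; retry policy is controlled entirely by RetryConfig.
val client = AnthropicLLMClient(apiKey)
val retryingClient = RetryingLLMClient(
    delegate = client,
    config = RetryConfig.CONSERVATIVE,
)

// Calls are forwarded to the delegate; transient failures are retried
// according to the configured policy.
val responses = retryingClient.execute(prompt, model, tools = emptyList())
```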

Parameters

delegate

The LLMClient to wrap with retry logic

config

Configuration for retry behavior

Constructors

constructor(delegate: LLMClient, config: RetryConfig = RetryConfig())

Properties

open val clientName: String

The name of the client.

Functions

open override fun close()
open suspend override fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Message.Response>

Executes a prompt and returns a list of response messages.

open suspend override fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<LLMChoice>

Executes a prompt and returns a list of LLM choices.

open fun executeStreaming(prompt: Prompt, model: LLModel): Flow<StreamFrame>
open override fun executeStreaming(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): Flow<StreamFrame>

Executes a prompt and returns a streaming flow of response chunks.
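Since the streaming overloads return a Flow, the stream is consumed with the standard Flow operators. A minimal sketch, assuming a prompt and model are already in scope (the exact contents of each StreamFrame depend on the frame type, so this collector just prints each one):

```kotlin
// Consume the streaming response chunk by chunk; retries apply to the
// wrapped call per the decorator's RetryConfig.
retryingClient.executeStreaming(prompt, model).collect { frame ->
    println(frame)
}
```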


Basic JSON schema generator supported by the LLMClient. Returns BasicJsonSchemaGenerator by default.


Standard JSON schema generator supported by the LLMClient. Returns StandardJsonSchemaGenerator by default.

open override fun llmProvider(): LLMProvider

Retrieves the LLMProvider instance in use.

open suspend override fun models(): List<LLModel>

Retrieves the list of available Large Language Models (LLMs) supported by the client.

open suspend override fun moderate(prompt: Prompt, model: LLModel): ModerationResult

Analyzes the provided prompt for violations of content policies or other moderation criteria.

fun LLMClient.toRetryingClient(retryConfig: RetryConfig = RetryConfig.DEFAULT): RetryingLLMClient

Converts an instance of LLMClient into a retrying client with customizable retry behavior.
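As a sketch, the extension is shorthand for constructing the decorator directly (same assumed apiKey as in the usage example above):

```kotlin
// Equivalent to RetryingLLMClient(AnthropicLLMClient(apiKey), RetryConfig.DEFAULT)
val retrying = AnthropicLLMClient(apiKey).toRetryingClient(RetryConfig.DEFAULT)
```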