RetryingLLMClient

class RetryingLLMClient(delegate: LLMClient, config: RetryConfig = RetryConfig()) : LLMClient

A decorator that adds retry capabilities to any LLMClient implementation.

This is a pure decorator: it has no knowledge of specific providers or implementations. It simply wraps any LLMClient and retries failed operations according to its configurable retry policy.

Example usage:

val client = AnthropicLLMClient(apiKey)
val retryingClient = RetryingLLMClient(client, RetryConfig.CONSERVATIVE)

Parameters

delegate

The LLMClient to wrap with retry logic

config

Configuration for retry behavior

Constructors

constructor(delegate: LLMClient, config: RetryConfig = RetryConfig())

Functions

open suspend override fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Message.Response>

Executes a prompt and returns a list of response messages.
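A minimal sketch of calling this through the wrapper, assuming `retryingClient`, `prompt`, and `model` are set up as in the usage example above (the helper name `askOnce` is hypothetical):

```kotlin
// Sketch only: assumes `retryingClient`, `prompt`, and `model` exist as in
// the usage example above. `tools` may be empty when tool calling is unused.
suspend fun askOnce(
    retryingClient: RetryingLLMClient,
    prompt: Prompt,
    model: LLModel
): List<Message.Response> =
    // Failed attempts are retried transparently per the configured RetryConfig
    retryingClient.execute(prompt, model, tools = emptyList())
```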

open suspend override fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<LLMChoice>

Executes a prompt and returns a list of LLM choices.

open override fun executeStreaming(prompt: Prompt, model: LLModel): Flow<String>

Executes a prompt and returns a streaming flow of response chunks.
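A sketch of consuming the returned Flow, again assuming the `retryingClient`, `prompt`, and `model` from the usage example above (the helper name `streamToConsole` is hypothetical):

```kotlin
// Sketch only: prints streamed response chunks as they arrive.
// `collect` is a suspending terminal operator, so the caller must be
// inside a coroutine.
suspend fun streamToConsole(
    retryingClient: RetryingLLMClient,
    prompt: Prompt,
    model: LLModel
) {
    retryingClient.executeStreaming(prompt, model)
        .collect { chunk -> print(chunk) }
}
```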

open suspend override fun moderate(prompt: Prompt, model: LLModel): ModerationResult

Analyzes the provided prompt for violations of content policies or other moderation criteria.