CapturingLLMClient

class CapturingLLMClient(executeResponses: List<Message.Response> = emptyList(), streamingChunks: List<String> = emptyList(), choices: List<LLMChoice> = emptyList(), moderationResult: ModerationResult = ModerationResult(isHarmful = false, categories = emptyMap())) : LLMClient

A test double implementation of LLMClient that captures the last inputs provided to each API and returns predefined responses. This is useful in unit and integration tests to assert that a component under test interacts with an LLM client as expected without making real network calls.

Constructor parameters allow you to predefine what each method should return.
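As a minimal, self-contained sketch of this capture-and-replay pattern: the stub types below stand in for the real Prompt, LLModel, and Message.Response, and the lastPrompt/lastModel capture properties are hypothetical names for illustration (the actual property names are listed in the Properties section).

```kotlin
// Stub types standing in for the real Prompt, LLModel, and Message.Response.
data class Prompt(val text: String)
data class LLModel(val id: String)
data class Response(val content: String)

// Simplified stand-in mirroring CapturingLLMClient's behavior:
// capture the inputs, return the predefined responses.
class CapturingClientSketch(
    private val executeResponses: List<Response> = emptyList()
) {
    var lastPrompt: Prompt? = null   // hypothetical capture property
        private set
    var lastModel: LLModel? = null   // hypothetical capture property
        private set

    fun execute(prompt: Prompt, model: LLModel): List<Response> {
        lastPrompt = prompt
        lastModel = model
        return executeResponses
    }
}

fun main() {
    val client = CapturingClientSketch(
        executeResponses = listOf(Response("stubbed answer"))
    )
    val result = client.execute(Prompt("hello"), LLModel("test-model"))

    // The test double returns the canned response...
    check(result.single().content == "stubbed answer")
    // ...and records the inputs for later assertions.
    check(client.lastPrompt == Prompt("hello"))
    check(client.lastModel == LLModel("test-model"))
    println("ok")
}
```

In a real test, the component under test would receive the capturing client in place of a production LLMClient, and the test would assert on the captured inputs afterwards.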

Constructors

constructor(executeResponses: List<Message.Response> = emptyList(), streamingChunks: List<String> = emptyList(), choices: List<LLMChoice> = emptyList(), moderationResult: ModerationResult = ModerationResult(isHarmful = false, categories = emptyMap()))

Properties

The last LLModel passed to executeMultipleChoices, or null if it hasn't been called yet.

The last Prompt passed to executeMultipleChoices, or null if it hasn't been called yet.

The last list of tools passed to executeMultipleChoices, or null if it hasn't been called yet.

The last LLModel passed to execute, or null if it hasn't been called yet.

The last Prompt passed to execute, or null if it hasn't been called yet.

The last list of tools passed to execute, or null if it hasn't been called yet.

The last LLModel passed to moderate, or null if it hasn't been called yet.

The last Prompt passed to moderate, or null if it hasn't been called yet.

The last LLModel passed to executeStreaming, or null if it hasn't been called yet.

The last Prompt passed to executeStreaming, or null if it hasn't been called yet.

Functions

open suspend override fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Message.Response>

Simulates a non-streaming LLM execution. Captures input parameters and returns the predefined executeResponses.

open suspend override fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<LLMChoice>

Simulates an LLM call that returns multiple choices. Captures input parameters and returns the predefined choices.

open override fun executeStreaming(prompt: Prompt, model: LLModel): Flow<String>

Simulates a streaming LLM execution. Captures input parameters and emits the predefined streamingChunks.
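A dependency-free sketch of the streaming behavior: to keep the example runnable without kotlinx.coroutines, a Sequence<String> stands in for the real Flow<String>, and the lastStreamingPrompt capture property name is hypothetical.

```kotlin
// Sketch: Sequence<String> substitutes for Flow<String> so the example
// needs no coroutines dependency; the real method returns a Flow.
class StreamingClientSketch(private val streamingChunks: List<String>) {
    var lastStreamingPrompt: String? = null   // hypothetical capture property
        private set

    fun executeStreaming(prompt: String): Sequence<String> {
        lastStreamingPrompt = prompt
        // Emits the predefined chunks, just as the real client
        // emits streamingChunks from its Flow.
        return streamingChunks.asSequence()
    }
}

fun main() {
    val client = StreamingClientSketch(listOf("Hel", "lo"))
    val full = client.executeStreaming("hi").joinToString("")
    check(full == "Hello")
    check(client.lastStreamingPrompt == "hi")
    println("ok")
}
```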

open suspend override fun moderate(prompt: Prompt, model: LLModel): ModerationResult

Simulates a content moderation call. Captures input parameters and returns the predefined moderationResult.
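A sketch of how the predefined moderationResult flows through a test, again with stub types and a hypothetical lastModeratedPrompt capture property:

```kotlin
// Stub standing in for the real ModerationResult.
data class ModerationResult(
    val isHarmful: Boolean,
    val categories: Map<String, Boolean>
)

class ModerationClientSketch(private val moderationResult: ModerationResult) {
    var lastModeratedPrompt: String? = null   // hypothetical capture property
        private set

    fun moderate(prompt: String): ModerationResult {
        lastModeratedPrompt = prompt
        return moderationResult  // always the predefined verdict
    }
}

fun main() {
    val client = ModerationClientSketch(
        ModerationResult(isHarmful = true, categories = mapOf("violence" to true))
    )
    check(client.moderate("some text").isHarmful)
    check(client.lastModeratedPrompt == "some text")
    println("ok")
}
```

Because the verdict is fixed at construction time, a test can exercise both the harmful and non-harmful code paths of the component under test simply by constructing two clients with different moderationResult values.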