SpringAiLLMClient

class SpringAiLLMClient(chatModel: ChatModel, provider: LLMProvider, clock: Clock, dispatcher: CoroutineDispatcher, chatOptionsCustomizer: ChatOptionsCustomizer, moderationModel: ModerationModel?) : LLMClient

An LLMClient implementation that delegates to a Spring AI ChatModel.

This adapter allows Koog agents to use any Spring AI chat model provider (Anthropic, OpenAI, Google, Ollama, etc.) as their underlying LLM backend.

Tool execution is always owned by the Koog agent framework. Spring AI receives only tool definitions (via org.springframework.ai.tool.ToolCallback instances whose call() always throws) together with internalToolExecutionEnabled = false, so Spring AI never attempts to execute tools itself.
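The "definitions without execution" contract above can be sketched in plain Kotlin. The stand-in types below only mirror the shape of Spring AI's tool-calling API (the real interfaces live in org.springframework.ai.tool and differ in detail); they illustrate how a callback can expose a tool's definition while refusing to execute it.

```kotlin
// Minimal stand-in types mirroring the shape of Spring AI's tool-calling API.
// This is an illustrative sketch, not the real Spring AI interfaces.
data class ToolDefinition(val name: String, val inputSchema: String)

interface ToolCallback {
    val toolDefinition: ToolDefinition
    fun call(toolInput: String): String
}

// Spring AI sees the tool's name and schema, but executing it fails fast,
// because tool execution is owned by the Koog agent framework.
fun definitionOnly(definition: ToolDefinition): ToolCallback = object : ToolCallback {
    override val toolDefinition = definition
    override fun call(toolInput: String): String =
        throw UnsupportedOperationException(
            "Tools are executed by Koog, not Spring AI: ${definition.name}")
}

fun main() {
    val cb = definitionOnly(ToolDefinition("search", """{"type":"object"}"""))
    println(cb.toolDefinition.name)                  // the definition is visible
    println(runCatching { cb.call("{}") }.isFailure) // but call() always throws
}
```

Combined with internalToolExecutionEnabled = false on the chat options, the model's tool-call requests are returned to the caller (Koog) instead of being dispatched by Spring AI.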

Parameters

chatModel - the Spring AI chat model to delegate to
provider - the LLMProvider to report for this client
clock - the clock used for creating response metadata timestamps
dispatcher - the CoroutineDispatcher used for blocking model calls
chatOptionsCustomizer - optional customizer for provider-specific ChatOptions tuning
moderationModel - optional Spring AI ModerationModel for content moderation; if null, moderate throws UnsupportedOperationException

Constructors

constructor(chatModel: ChatModel, provider: LLMProvider, clock: Clock, dispatcher: CoroutineDispatcher, chatOptionsCustomizer: ChatOptionsCustomizer, moderationModel: ModerationModel?)

Types

class Builder

A Java-friendly builder for SpringAiLLMClient.

object Companion

Java-friendly builder access.
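The Builder and Companion exist for Java callers, but they are usable from Kotlin as well. The snippet below is a hypothetical sketch: the builder factory and setter names (builder, clock, dispatcher, moderationModel) are assumptions about the API shape, not confirmed signatures.

```kotlin
// Hypothetical construction sketch; the actual Builder method names may differ.
val client = SpringAiLLMClient.builder(chatModel, LLMProvider.OpenAI)
    .clock(Clock.systemUTC())          // timestamps for response metadata
    .dispatcher(Dispatchers.IO)        // blocking model calls run here
    .moderationModel(moderationModel)  // optional; omit to disable moderate()
    .build()
```

The primary constructor shown above remains available for callers who prefer to pass all collaborators explicitly.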

Properties

open val clientName: String

Functions

open override fun close()
fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>, executorService: ExecutorService?): List<Message.Response>
open suspend override fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Message.Response>
open suspend fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<LLMChoice>
open fun executeStreaming(prompt: Prompt, model: LLModel): Flow<StreamFrame>

open override fun executeStreaming(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): Flow<StreamFrame>

Streams LLM responses by subscribing to ChatModel.stream and converting each chunk into Koog StreamFrame events.
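Because executeStreaming returns a cold Flow, frames arrive only once a collector subscribes. A consumption sketch, assuming Koog's StreamFrame is a sealed hierarchy (the variant names here are illustrative, not confirmed):

```kotlin
// Illustrative streaming consumption; StreamFrame variant names are assumptions.
client.executeStreaming(prompt, model, tools).collect { frame ->
    when (frame) {
        is StreamFrame.Append -> print(frame.text)   // incremental text chunk
        is StreamFrame.ToolCall -> handle(frame)     // tool call for Koog to execute
        is StreamFrame.End -> println()              // stream finished
    }
}
```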

open override fun llmProvider(): LLMProvider

open suspend override fun models(): List<LLModel>

Returns a single-element list containing a model derived from the configured LLMProvider and ChatModel, without capabilities or parameters.

fun moderate(prompt: Prompt, model: LLModel, executorService: ExecutorService?): ModerationResult
open suspend override fun moderate(prompt: Prompt, model: LLModel): ModerationResult
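As noted in the parameter docs, moderate is only functional when a ModerationModel was supplied at construction time; otherwise it throws UnsupportedOperationException. A defensive-call sketch:

```kotlin
// moderate() requires a configured ModerationModel; guard against its absence.
val moderation: ModerationResult? = try {
    client.moderate(prompt, model)
} catch (e: UnsupportedOperationException) {
    // No ModerationModel was passed to the constructor/builder.
    null
}
```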