OllamaClient

class OllamaClient(baseUrl: String = "http://localhost:11434", baseClient: HttpClient = HttpClient(engineFactoryProvider()), timeoutConfig: ConnectionTimeoutConfig = ConnectionTimeoutConfig(), clock: Clock = Clock.System) : LLMClient, LLMEmbeddingProvider

Client for interacting with the Ollama API, offering comprehensive model support.

Parameters

baseUrl

The base URL of the Ollama server. Defaults to "http://localhost:11434".

baseClient

The underlying HTTP client used for making requests.

timeoutConfig

Configuration for connection, request, and socket timeouts.

clock

Clock instance used for tracking response metadata timestamps.

Implements:

  • LLMClient for executing prompts and streaming responses.

  • LLMEmbeddingProvider for generating embeddings from input text.

Constructors

constructor(baseUrl: String = "http://localhost:11434", baseClient: HttpClient = HttpClient(engineFactoryProvider()), timeoutConfig: ConnectionTimeoutConfig = ConnectionTimeoutConfig(), clock: Clock = Clock.System)
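A construction sketch: all constructor arguments have defaults, so a bare call targets a local server. The remote hostname below is illustrative, and the sketches on this page assume the relevant Koog types are imported and a coroutine scope is available where one is needed.

// Default construction: connects to http://localhost:11434.
val client = OllamaClient()

// Pointing at another server; this hostname is an assumption, not part of the API.
val remoteClient = OllamaClient(baseUrl = "http://ollama.internal:11434")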

Functions

open suspend override fun embed(text: String, model: LLModel): List<Double>

Embeds the given text using the Ollama model.
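A usage sketch; client and embeddingModel are assumed to be provided elsewhere, and the choice of embedding model is an assumption rather than part of this API.

// Sketch only: embeds a string and inspects the resulting vector.
suspend fun embedExample(client: OllamaClient, embeddingModel: LLModel) {
    val vector: List<Double> = client.embed("The quick brown fox", embeddingModel)
    println("Embedding dimension: ${vector.size}")
}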

open suspend override fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Message.Response>
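Executes the prompt against the given model and returns the response messages; judging by the signature, an empty tool list requests no tool calling. A hedged sketch, with the Prompt and LLModel assumed to be built elsewhere:

// Sketch only: runs a prompt and prints each response message.
suspend fun executeExample(client: OllamaClient, prompt: Prompt, model: LLModel) {
    val responses: List<Message.Response> = client.execute(prompt, model, tools = emptyList())
    responses.forEach { println(it) }
}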
open suspend fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<LLMChoice>
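Shaped like execute, but returns a list of LLMChoice values, presumably one per alternative completion. A sketch under the same assumptions:

suspend fun choicesExample(client: OllamaClient, prompt: Prompt, model: LLModel) {
    val choices: List<LLMChoice> = client.executeMultipleChoices(prompt, model, tools = emptyList())
    println("Received ${choices.size} choice(s)")
}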
open override fun executeStreaming(prompt: Prompt, model: LLModel): Flow<String>
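Streams the response as a Flow of text chunks. A sketch that prints output as it arrives (the collect extension comes from kotlinx.coroutines.flow):

suspend fun streamingExample(client: OllamaClient, prompt: Prompt, model: LLModel) {
    client.executeStreaming(prompt, model).collect { chunk ->
        print(chunk) // chunks arrive incrementally as the model generates
    }
}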
suspend fun getModelOrNull(name: String, pullIfMissing: Boolean = false): OllamaModelCard?

Returns a model card by its model name, or null if no such model exists on the server.
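A lookup sketch; the model name is illustrative, and the reading of pullIfMissing (pull the model when absent) is inferred from the parameter name rather than stated on this page.

suspend fun lookupExample(client: OllamaClient) {
    val card: OllamaModelCard? = client.getModelOrNull("llama3.2", pullIfMissing = true)
    println(if (card != null) "Model available" else "Model not found")
}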


Returns the model cards for all the available models on the server.

open suspend override fun moderate(prompt: Prompt, model: LLModel): ModerationResult
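Presumably classifies the prompt's content and returns a ModerationResult. A hedged sketch; the choice of a moderation-capable model is an assumption:

suspend fun moderationExample(client: OllamaClient, prompt: Prompt, moderationModel: LLModel) {
    val result: ModerationResult = client.moderate(prompt, moderationModel)
    println(result)
}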