OllamaClient
class OllamaClient(val baseUrl: String = "http://localhost:11434", baseClient: HttpClient = HttpClient(engineFactoryProvider()), timeoutConfig: ConnectionTimeoutConfig = ConnectionTimeoutConfig(), clock: Clock = Clock.System, contextWindowStrategy: ContextWindowStrategy = ContextWindowStrategy.None) : LLMClient, LLMEmbeddingProvider
Client for interacting with the Ollama API, with comprehensive model support.
Implements:
LLMClient for executing prompts and streaming responses.
LLMEmbeddingProvider for generating embeddings from input text.
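As an LLMEmbeddingProvider, the client can also turn input text into vectors. A minimal sketch, assuming the interface exposes a suspending embed(text, model) function returning a list of doubles; that shape comes from the wider library and is not spelled out on this page:

// Sketch: embed a string. `embed` is an assumed interface method,
// and `embeddingModel` is a hypothetical stand-in for an LLModel
// describing an embedding-capable Ollama model.
// Call from inside a coroutine, since `embed` is suspending.
val vector: List<Double> = client.embed(
    text = "Kotlin coroutines",
    model = embeddingModel,
)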
Parameters
baseUrl
The base URL of the Ollama server. Defaults to "http://localhost:11434".
baseClient
The underlying HTTP client used for making requests.
timeoutConfig
Configuration for connection, request, and socket timeouts.
clock
Clock instance used for tracking response metadata timestamps.
contextWindowStrategy
The ContextWindowStrategy to use for computing context window lengths. Defaults to ContextWindowStrategy.None.
Constructors
constructor(baseUrl: String = "http://localhost:11434", baseClient: HttpClient = HttpClient(engineFactoryProvider()), timeoutConfig: ConnectionTimeoutConfig = ConnectionTimeoutConfig(), clock: Clock = Clock.System, contextWindowStrategy: ContextWindowStrategy = ContextWindowStrategy.None)
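For instance, a client that talks to a non-default host needs only the baseUrl override; every other parameter keeps the defaults shown above. A minimal sketch (the host name is illustrative):

// Sketch: point the client at a non-default Ollama host; the
// HTTP engine, timeouts, clock, and context-window strategy all
// keep the defaults from the signature above.
val client = OllamaClient(baseUrl = "http://ollama.internal:11434")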
Functions
open override suspend fun execute(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<Message.Response>
Executes the prompt on the given model, exposing the given tools, and returns the model's response messages.
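A usage sketch for execute. The prompt builder and the OllamaModels catalog are assumptions borrowed from the wider library; only execute itself is documented on this page:

// Sketch: run a prompt with no tools and print the responses.
// Call from inside a coroutine, since `execute` is suspending.
val responses: List<Message.Response> = client.execute(
    prompt = prompt("example") {
        system("You are a terse assistant.")
        user("Name one Kotlin coroutine builder.")
    },
    model = OllamaModels.Meta.LLAMA_3_2, // assumed catalog entry
    tools = emptyList(),
)
responses.forEach { println(it) }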
open suspend fun executeMultipleChoices(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): List<LLMChoice>
Executes the prompt and returns several alternative completions, one LLMChoice per alternative.
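The same assumptions apply to the multiple-choice variant:

// Sketch: request alternative completions for one prompt.
// Call from inside a coroutine.
val choices: List<LLMChoice> = client.executeMultipleChoices(
    prompt = prompt("choices") { user("Suggest a function name.") },
    model = OllamaModels.Meta.LLAMA_3_2, // assumed catalog entry
    tools = emptyList(),
)
println("received ${choices.size} alternative choices")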
open override fun executeStreaming(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): Flow<StreamFrame>
Executes the prompt and streams the response as a cold Flow of StreamFrame values.
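Since executeStreaming returns a cold Flow, no request is made until the flow is collected. A sketch; frame handling stays generic because StreamFrame's subtypes are not listed on this page:

// Sketch: collect streamed frames as they arrive.
// Collection must happen inside a coroutine.
client.executeStreaming(
    prompt = prompt("stream") { user("Stream a haiku.") },
    model = OllamaModels.Meta.LLAMA_3_2, // assumed catalog entry
    tools = emptyList(),
).collect { frame ->
    println(frame) // one StreamFrame per emission
}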
Returns a model card by its model name, or null if no such model exists on the server.
Returns the model cards for all the available models on the server.
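The exact function names are not shown here; the sketch below uses the hypothetical names getModelOrNull and getModels to match the two descriptions above:

// Sketch: both function names are hypothetical stand-ins for the
// lookups described above. Call from inside a coroutine.
val card = client.getModelOrNull("llama3.2")
if (card == null) println("no such model on the server")

val cards = client.getModels()
cards.forEach { println(it) }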