AIAgentLLMWriteSessionCommon

Common base implementation for mutable LLM sessions shared across platform-specific actual classes.

Inheritors

Properties

Represents the active language model used within the session.

Represents the current prompt associated with the LLM session.

Represents the active response processor within the session.

Provides a list of available tools in the session.

Functions

Appends messages to the current prompt using PromptBuilder.

fun changeLLMParams(newParams: LLMParams)

Updates LLM parameters on the current prompt.

fun changeModel(newModel: LLModel)

Updates the active model in this session.

Clears the history of messages in the current AI Agent LLM Write Session.

open override fun close()
fun dropLastNMessages(n: Int, preserveSystemMessages: Boolean = true)

Removes the last n messages from the current prompt in the write session.
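The preserve-system-messages behavior can be illustrated with a minimal sketch. The `Msg` type and `dropLastN` helper below are stand-ins for illustration, not the library's actual `Message` model or implementation:

```kotlin
// Sketch of dropLastNMessages semantics with a simplified message model:
// drop the last n non-system messages, keeping system messages in place.
data class Msg(val role: String, val text: String)

fun dropLastN(messages: List<Msg>, n: Int, preserveSystemMessages: Boolean = true): List<Msg> {
    if (!preserveSystemMessages) return messages.dropLast(n)
    var toDrop = n
    // Walk from the end: skip up to n non-system messages, keep everything else.
    return messages.foldRight(mutableListOf<Msg>()) { msg, acc ->
        if (msg.role != "system" && toDrop > 0) toDrop-- else acc.add(0, msg)
        acc
    }
}
```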

Drops all trailing tool-call messages from the current prompt.

Finds a specific tool instance from the tool registry by a tool instance type.

Finds a specific tool instance from the tool registry by tool class.

fun leaveLastNMessages(n: Int, preserveSystemMessages: Boolean = true)

Keeps only the last N messages in the session's prompt by removing all earlier messages.

fun leaveMessagesFromTimestamp(timestamp: Instant, preserveSystemMessages: Boolean = true)

Removes all messages from the current session's prompt that have a timestamp earlier than the specified timestamp.
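The cutoff logic amounts to a timestamp filter. In this sketch, epoch-millisecond `Long` values stand in for the real `Instant` timestamps, and `TimedMsg`/`leaveFrom` are hypothetical names:

```kotlin
// Sketch of leaveMessagesFromTimestamp semantics: keep messages at or after
// the cutoff, optionally preserving system messages regardless of their age.
data class TimedMsg(val role: String, val text: String, val timestamp: Long)

fun leaveFrom(messages: List<TimedMsg>, timestamp: Long, preserveSystemMessages: Boolean = true): List<TimedMsg> =
    messages.filter { msg ->
        msg.timestamp >= timestamp || (preserveSystemMessages && msg.role == "system")
    }
```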

Parses a structured response from an assistant message using the specified configuration.

suspend fun replaceHistoryWithTLDR(strategy: HistoryCompressionStrategy = HistoryCompressionStrategy.WholeHistory, preserveMemory: Boolean = true)

Rewrites the LLM message history, leaving only the user message and the resulting TLDR.
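One plausible shape of this compression, sketched with hypothetical names: the real implementation asks the LLM for the summary and its retained messages depend on the chosen HistoryCompressionStrategy; here a caller-supplied function stands in for that LLM call, and only system messages plus the TLDR are kept:

```kotlin
// Conceptual sketch of history compression into a TLDR message.
data class ChatMsg(val role: String, val text: String)

fun compressHistory(messages: List<ChatMsg>, summarize: (List<ChatMsg>) -> String): List<ChatMsg> {
    val system = messages.filter { it.role == "system" }
    val tldr = ChatMsg("assistant", summarize(messages))
    return system + tldr
}
```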

Sends a request to LLM and appends the response to the prompt.

suspend fun requestLLMForceOneTool(tool: Tool<*, *>): Message.Response

Sends a request while forcing a specific tool and appends the response to the prompt.

Sends a request to LLM and appends all received responses to the prompt.

Sends a request to LLM and returns all available response choices.

Sends a request that enforces tool calling and appends all received responses to the prompt.

Sends a request without tool usage and appends all received responses to the prompt.

Sends a request that enforces tool calling and appends the received response to the prompt.

Sends a streaming request to LLM.

suspend fun requestLLMStreaming(definition: StructureDefinition? = null): Flow<StreamFrame>

Streams a response from LLM, optionally adding a structure definition to the prompt beforehand.

suspend fun <T> requestLLMStructured(serializer: KSerializer<T>, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>

Sends a request to LLM and gets a structured response, appending the assistant message on success.

inline suspend fun <T> requestLLMStructured(examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>

Requests a structured response from the language model using a reified serializer.
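The fixing-parser idea behind the optional `fixingParser` parameter can be sketched generically: attempt to parse the raw model output, and on failure apply a repair pass and retry. The `parseWithFixer` helper and its parameters are illustrative, not the library's API:

```kotlin
// Sketch of a structure-fixing parse: parse, repair on failure, retry once.
fun <T> parseWithFixer(
    raw: String,
    parse: (String) -> T?,
    fix: (String) -> String,
): Result<T> {
    parse(raw)?.let { return Result.success(it) }
    val repaired = fix(raw)
    return parse(repaired)?.let { Result.success(it) }
        ?: Result.failure(IllegalArgumentException("Unparseable structured response"))
}
```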

Sends a request without tool usage and appends the received response to the prompt.

suspend fun requestModeration(moderatingModel: LLModel? = null): ModerationResult

Sends a moderation request using the specified moderating model or the session model.

fun rewritePrompt(body: (prompt: Prompt) -> Prompt)

Rewrites the current prompt by applying a transformation function.

Sets the ai.koog.prompt.params.LLMParams.ToolChoice for this LLM session.

Sets the ai.koog.prompt.params.LLMParams.ToolChoice to ai.koog.prompt.params.LLMParams.ToolChoice.Auto, letting the LLM decide automatically between calling tools and generating text.

inline fun <TArgs, TResult> Flow<TArgs>.toParallelToolCalls(safeTool: SafeTool<TArgs, TResult>, concurrency: Int = 16): Flow<SafeTool.Result<TResult>>

Converts each flow item into a parallel tool call using an already resolved SafeTool.

inline fun <TArgs, TResult> Flow<TArgs>.toParallelToolCalls(tool: Tool<TArgs, TResult>, concurrency: Int = 16): Flow<SafeTool.Result<TResult>>

Converts each flow item into a parallel tool call using a tool instance.

inline fun <TArgs, TResult> Flow<TArgs>.toParallelToolCalls(toolClass: KClass<out Tool<TArgs, TResult>>, concurrency: Int = 16): Flow<SafeTool.Result<TResult>>

Converts each flow item into a parallel tool call using a tool class.

inline fun <TArgs, TResult> Flow<TArgs>.toParallelToolCallsRaw(safeTool: SafeTool<TArgs, TResult>, concurrency: Int = 16): Flow<String>

Converts each flow item into a parallel tool call and emits only raw string content.

inline fun <TArgs, TResult> Flow<TArgs>.toParallelToolCallsRaw(toolClass: KClass<out Tool<TArgs, TResult>>, concurrency: Int = 16): Flow<String>

Converts each flow item into a parallel tool call using a tool class and emits raw string content.
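The core idea shared by these operators, bounded-concurrency fan-out over tool calls, can be sketched without the library. This stand-in uses a fixed thread pool and plain lists instead of coroutine `Flow`s; `parallelCalls` is a hypothetical helper, not Koog's implementation:

```kotlin
import java.util.concurrent.Callable
import java.util.concurrent.Executors

// Sketch of bounded-concurrency parallel calls: at most `concurrency`
// invocations run at once, and results come back in input order.
fun <TArgs, TResult> parallelCalls(
    args: List<TArgs>,
    concurrency: Int = 16,
    call: (TArgs) -> TResult,
): List<TResult> {
    val pool = Executors.newFixedThreadPool(concurrency)
    try {
        return args.map { a -> pool.submit(Callable { call(a) }) }.map { it.get() }
    } finally {
        pool.shutdown()
    }
}
```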

Unsets the ai.koog.prompt.params.LLMParams.ToolChoice. When left unspecified, this parameter typically defaults to ai.koog.prompt.params.LLMParams.ToolChoice.Auto.

Updates the current prompt by applying modifications defined in the provided block.