AIAgentLLMReadSessionCommon

Common base implementation for read-only LLM sessions shared across platform-specific actual classes.

Inheritors

Properties

Config of the agent running the session.

Represents the active language model used within the session.

Represents the current prompt associated with the LLM session. The prompt contains the input messages, model configuration, and parameters.

Represents the active response processor within the session.

Provides a list of available tools in the session.

Functions

open override fun close()

Executes a request for the provided prompt and tools and returns all response messages.

Executes a request for the provided prompt and tools and returns the first response.

Executes a streaming request for the provided prompt and tools.

Parses a structured response from a language model message using the specified configuration.

Sends a request to the underlying LLM and returns the first non-reasoning response.

suspend fun requestLLMForceOneTool(tool: Tool<*, *>): Message.Response

Sends a request to the language model while enforcing the use of a specific tool.
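As a sketch of forcing a specific tool call (the `session` variable and `WeatherTool` instance are assumptions for illustration; only `requestLLMForceOneTool` comes from this API):

```kotlin
// Hypothetical tool instance; any Tool<*, *> registered in the
// session would work here.
val weatherTool: Tool<*, *> = WeatherTool

// Force the model to call this exact tool instead of letting it
// choose freely among the session's available tools.
val response: Message.Response =
    session.requestLLMForceOneTool(weatherTool)
```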

Sends a request to the language model and returns all response messages.

Sends a request to the language model and returns all available response choices.

Sends a request to the language model that enforces tool use and returns all response messages.

Sends a request to the language model without using any tools and returns multiple responses.

Sends a request to the language model that enforces tool use and returns the response.

Sends a streaming request to the underlying LLM and returns the streamed response.

suspend fun <T> requestLLMStructured(serializer: KSerializer<T>, examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>

Sends a request to the LLM and returns a structured response parsed with the given serializer.

inline suspend fun <T> requestLLMStructured(examples: List<T> = emptyList(), fixingParser: StructureFixingParser? = null): Result<StructuredResponse<T>>

Requests a structured response from the language model using a reified serializer.
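A minimal sketch of the reified overload, assuming a kotlinx.serialization-annotated type; `WeatherForecast`, the `session` receiver, and the `structure` property access are illustrative, not part of the documented signature:

```kotlin
import kotlinx.serialization.Serializable

@Serializable
data class WeatherForecast(val city: String, val temperatureC: Int)

// The reified overload derives the KSerializer from the type
// argument, so no serializer needs to be passed explicitly.
suspend fun fetchForecast(session: AIAgentLLMReadSessionCommon): WeatherForecast? =
    session.requestLLMStructured<WeatherForecast>()
        .getOrNull()    // Result<StructuredResponse<WeatherForecast>>
        ?.structure     // the parsed value (property name assumed)
```

Passing a `fixingParser` lets a secondary model repair malformed structured output before parsing fails.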

Sends a request to the language model without using any tools and returns the response.

suspend fun requestModeration(moderatingModel: LLModel? = null): ModerationResult

Sends a moderation request to the specified or default model.
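A sketch of a moderation check; the `session` variable and the `isHarmful` flag on `ModerationResult` are assumptions for illustration:

```kotlin
// Moderate the session's current prompt with the default model;
// pass a moderatingModel argument to use a different one.
val moderation: ModerationResult = session.requestModeration()
if (moderation.isHarmful) {  // property name assumed
    // Reject or rewrite the prompt before proceeding.
}
```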