PromptExecutor
An interface representing an executor for processing LLM prompts. This defines methods for executing prompts against models with or without tool assistance, as well as for streaming responses.
Implements AutoCloseable, as prompt executors typically work with LLM clients; always close the executor when finished.
Note: a single PromptExecutor might embed multiple LLM clients for different LLM providers supporting different models.
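A minimal usage sketch. The factory function, `prompt` DSL, and model constant below are assumptions based on typical Koog setup and may differ between library versions; any `PromptExecutor` implementation works the same way. The `use` block ensures the underlying LLM clients are closed, as required by the AutoCloseable contract noted above:

```kotlin
// Hypothetical sketch: simpleOpenAIExecutor, the prompt DSL, and the
// model constant are assumed names, not guaranteed API.
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val executor = simpleOpenAIExecutor(System.getenv("OPENAI_API_KEY"))

    // PromptExecutor implements AutoCloseable, so `use` guarantees
    // the embedded LLM clients are released when the block exits.
    executor.use { exec ->
        val responses = exec.execute(
            prompt = prompt("greeting") { user("Say hello in one word.") },
            model = OpenAIModels.Chat.GPT4o, // assumed model constant
        )
        responses.forEach(::println)
    }
}
```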
Inheritors
Constructors
Types
Companion object for PromptExecutor.
Functions
Executes a given prompt using the specified LLM and tools, returning a list of responses from the model.
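A hedged sketch of a tool-assisted call. The tool descriptor and model constant are illustrative assumptions; the key point is that the returned list can mix assistant messages and tool-call requests:

```kotlin
// Sketch only: `myTool` and the model constant are illustrative
// assumptions, not guaranteed API names.
val responses: List<Message.Response> = executor.execute(
    prompt = prompt("weather") { user("What's the weather in Oslo?") },
    model = OpenAIModels.Chat.GPT4o,
    tools = listOf(myTool.descriptor), // tools the model is allowed to call
)
// Inspect each response: it may be plain text or a tool-call request
// that your code must execute and feed back to the model.
```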
Receives multiple independent choices from the LLM. This method is implemented only for providers that support multiple choices.
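A sketch of requesting several independent completions. The method and parameter names are assumptions; as noted above, only some providers (for example, OpenAI-style APIs with an n-choices parameter) support this, and others may throw or return a single choice:

```kotlin
// Sketch: signature assumed from the description above; verify against
// the actual interface before relying on it.
val choices = executor.executeMultipleChoices(
    prompt = prompt("brainstorm") { user("Suggest a project name.") },
    model = OpenAIModels.Chat.GPT4o, // assumed model constant
    tools = emptyList(),
)
// Each element is one independent completion for the same prompt.
choices.forEach { choice -> println(choice) }
```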
Executes a given prompt using the specified LLM and returns a stream of output as a flow of StreamFrame objects.
Executes a given prompt using the specified language model (LLM) and tools, providing the results as a synchronous stream of StreamFrame objects.
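A streaming sketch. The `StreamFrame` subtype names used in the `when` branches are assumptions about the sealed hierarchy's shape; check the actual frame types before pattern-matching on them:

```kotlin
// Sketch: frame subtypes (Append / End) are assumed names.
executor.executeStreaming(
    prompt = prompt("story") { user("Tell a very short story.") },
    model = OpenAIModels.Chat.GPT4o, // assumed model constant
).collect { frame ->
    when (frame) {
        is StreamFrame.Append -> print(frame.text) // incremental text chunk
        is StreamFrame.End -> println("\n[stream finished]")
        else -> Unit // tool-call frames, etc.
    }
}
```

Collecting the flow inside a coroutine lets the caller render tokens as they arrive instead of waiting for the full response.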
Executes a prompt with structured output, enhancing it with schema instructions or the provider's native structured-output parameter, and parses the response into the defined structure.
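A structured-output sketch. The `@Serializable` data class is the caller's own type; the `executeStructured` call shape shown here is an assumption based on the description above:

```kotlin
// Sketch: the executeStructured overload and its parameters are
// assumed; the data class is purely illustrative.
@Serializable
data class WeatherReport(val city: String, val temperatureC: Double)

val result = executor.executeStructured<WeatherReport>(
    prompt = prompt("weather") { user("Report Oslo's weather as JSON.") },
    model = OpenAIModels.Chat.GPT4o, // assumed model constant
)
// `result` carries the parsed WeatherReport, or a failure if the
// model's output could not be parsed into the schema.
```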
Returns the basic JSON schema generator required for the given model. Returns BasicJsonSchemaGenerator by default.
Returns the standard JSON schema generator required for the given model. Returns StandardJsonSchemaGenerator by default.
Retrieves a list of available models from all LLM clients managed by this executor.
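A discovery sketch. The method name is assumed from the description above; the result aggregates models across every client the executor embeds:

```kotlin
// Sketch: method name assumed. For a multi-provider executor this
// aggregates models from all embedded LLM clients.
val available = executor.models()
available.forEach { model -> println(model) }
```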
Moderates the content of a given message with attachments using a specified LLM.
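A moderation sketch. The call shape, the moderation model constant, and the `isHarmful` property are assumptions; providers typically require a dedicated moderation model here rather than a chat model:

```kotlin
// Sketch: moderation model constant and result properties are assumed.
val verdict = executor.moderate(
    prompt = prompt("check") { user("Some user-generated content") },
    model = OpenAIModels.Moderation.Omni, // assumed moderation model
)
if (verdict.isHarmful) {
    println("Content flagged by moderation: $verdict")
}
```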
Parses a structured response from the assistant message using the provided structured-output configuration and language model. If a fixing parser is specified in the configuration, it is used; otherwise, the structure is parsed directly.