LLMStreamingCompletedHandler

A functional interface for handling logic that should run after streaming from a large language model (LLM) completes. An implementation can perform custom processing based on the data available at that point, such as the prompt, tools, and model used for the completed stream.

Functions

abstract suspend fun handle(eventContext: LLMStreamingCompletedContext)

Handles the post-processing of a streaming session and its associated data after streaming completes.
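A minimal sketch of an implementation, assuming the interface can be implemented as a lambda (fun interface) and that the event context exposes properties such as the model and prompt, as suggested by the description above; the actual property names on LLMStreamingCompletedContext may differ.

// Sketch: a handler that logs once a streaming session finishes.
// Assumes LLMStreamingCompletedHandler can be implemented as a lambda
// and that `model` and `prompt` are available on the context (assumed
// names based on the description above, not confirmed API).
val loggingHandler = LLMStreamingCompletedHandler { eventContext ->
    println("Streaming completed for model: ${eventContext.model}")
    println("Original prompt: ${eventContext.prompt}")
}

The handler could then be registered wherever streaming event handlers are configured, so that the logging runs after each completed stream.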