LLMStreamingStartingHandler

A functional interface for handling logic that runs before streaming from a large language model (LLM) begins. It enables preprocessing or validation based on the provided prompt, available tools, target LLM model, and a unique run identifier.

This can be particularly useful for custom input manipulation, logging, validation, or applying configurations to the streaming request based on external context.

Functions

abstract suspend fun handle(eventContext: LLMStreamingStartingContext)

Handles the start of a streaming interaction by processing the given prompt, tools, model, and run ID.
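
Because this is a functional interface, it can be implemented with a lambda. The sketch below is illustrative only: the property names on `LLMStreamingStartingContext` (`runId`, `model`, `prompt`) are assumptions inferred from the description above and should be checked against the actual context API.

```kotlin
// Hypothetical sketch: logs and validates the request before streaming begins.
// Property names on the context (runId, model, prompt) are assumed, not confirmed.
val onStreamingStarting = LLMStreamingStartingHandler { eventContext ->
    // Log which run and model are about to stream.
    println("Run ${eventContext.runId}: streaming from ${eventContext.model}")

    // Example validation: fail fast on an empty prompt
    // before any request is sent to the LLM.
    require(eventContext.prompt.messages.isNotEmpty()) {
        "Prompt must contain at least one message"
    }
}
```

Kotlin's SAM conversion applies to functional interfaces even when the single abstract method is a `suspend` function, so the lambda body may call other suspending functions if needed.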