onLLMStreamingStarting
fun onLLMStreamingStarting(handler: suspend (eventContext: LLMStreamingStartingContext) -> Unit)
Registers a handler to be invoked before streaming from a language model begins.
The handler runs immediately before the streaming operation starts, so you can preprocess, validate, or log the streaming request.
Parameters
handler
The handler function that receives a LLMStreamingStartingContext containing the run ID, prompt, model, and available tools for the streaming session.
Example:
onLLMStreamingStarting { eventContext ->
    logger.info("Starting stream for run: ${eventContext.runId}")
    logger.debug("Prompt: ${eventContext.prompt}")
}
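A handler can also inspect the model and the tools available for the streaming session. The sketch below assumes the context exposes them as model and tools properties, based on the parameter description above; check LLMStreamingStartingContext for the actual property names.
onLLMStreamingStarting { eventContext ->
    // Assumed property names, inferred from the context description above
    logger.info("Streaming with model: ${eventContext.model}")
    logger.debug("Available tools: ${eventContext.tools}")
}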