executeStreaming

open override fun executeStreaming(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): Flow<StreamFrame>

Streams LLM responses by subscribing to ChatModel.stream and converting each chunk into Koog StreamFrame events.

Text content is emitted immediately as StreamFrame.TextDelta frames. Tool calls are handled by a SpringAiToolCallAssembler whose assembly mode depends on the detected LLMProvider.

The resulting flow is built with ai.koog.prompt.streaming.StreamFrameFlowBuilder, which automatically pairs each StreamFrame.ToolCallDelta with a corresponding StreamFrame.ToolCallComplete and emits StreamFrame.TextComplete / StreamFrame.ReasoningComplete boundary frames.

All blocking I/O runs on the configured dispatcher (default Dispatchers.IO).
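A consumer of this flow typically collects frames with a `when` over the frame type. The sketch below is self-contained and uses simplified stand-in frame types (the real hierarchy lives in ai.koog.prompt.streaming and carries richer payloads); it only illustrates the collection pattern, not the actual Koog API surface.

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flowOf
import kotlinx.coroutines.runBlocking

// Stand-in frame hierarchy mirroring the StreamFrame variants named above.
// Fields and constructors are assumptions for illustration only.
sealed interface StreamFrame {
    data class TextDelta(val text: String) : StreamFrame
    data class ToolCallDelta(val id: String, val partialArgs: String) : StreamFrame
    data class ToolCallComplete(val id: String, val name: String, val args: String) : StreamFrame
    data object TextComplete : StreamFrame
}

// Collect a frame flow: accumulate text deltas, act only on completed tool
// calls (deltas are partial and are paired with a Complete by the builder).
suspend fun collectFrames(frames: Flow<StreamFrame>): String {
    val text = StringBuilder()
    frames.collect { frame ->
        when (frame) {
            is StreamFrame.TextDelta -> text.append(frame.text)
            is StreamFrame.ToolCallDelta -> { /* partial args; wait for Complete */ }
            is StreamFrame.ToolCallComplete -> println("tool ${frame.name}(${frame.args})")
            StreamFrame.TextComplete -> { /* text boundary reached */ }
        }
    }
    return text.toString()
}

fun main() = runBlocking {
    val frames = flowOf(
        StreamFrame.TextDelta("Hello, "),
        StreamFrame.TextDelta("world"),
        StreamFrame.TextComplete,
    )
    println(collectFrames(frames)) // prints "Hello, world"
}
```

Because blocking I/O is confined to the configured dispatcher, the collector itself can run on any coroutine context.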