executeStreaming
Streams LLM responses by subscribing to ChatModel.stream and converting each chunk into Koog StreamFrame events.
Text content is emitted immediately as StreamFrame.TextDelta frames. Tool calls are handled by a SpringAiToolCallAssembler whose mode depends on the detected LLMProvider:
- Anthropic / Google (SpringAiToolStreamingMode.EMIT_IMMEDIATELY): tool calls arrive fully formed in each chunk and are emitted immediately.
- OpenAI and unknown providers (SpringAiToolStreamingMode.BUFFER_UNTIL_END): tool call fragments are buffered across chunks and emitted as complete tool calls after the stream ends.
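The two assembly modes can be sketched as follows. This is a simplified, illustrative stand-in, not the actual SpringAiToolCallAssembler: the type names ToolCallPart and AssembledToolCall, and the assembler's shape, are assumptions for the sake of the example.

```kotlin
// Illustrative sketch of the two tool-call assembly modes. All names here
// are hypothetical stand-ins, not Koog or Spring AI API.
enum class ToolStreamingMode { EMIT_IMMEDIATELY, BUFFER_UNTIL_END }

data class ToolCallPart(val id: String, val name: String?, val argsFragment: String)
data class AssembledToolCall(val id: String, val name: String, val args: String)

class ToolCallAssembler(private val mode: ToolStreamingMode) {
    // id -> (tool name, accumulated JSON argument fragments)
    private val buffered = LinkedHashMap<String, Pair<String, StringBuilder>>()

    // Returns calls ready to emit for this chunk (EMIT_IMMEDIATELY),
    // or nothing while fragments are still being buffered (BUFFER_UNTIL_END).
    fun onPart(part: ToolCallPart): List<AssembledToolCall> = when (mode) {
        ToolStreamingMode.EMIT_IMMEDIATELY ->
            listOf(AssembledToolCall(part.id, part.name.orEmpty(), part.argsFragment))
        ToolStreamingMode.BUFFER_UNTIL_END -> {
            val entry = buffered.getOrPut(part.id) { part.name.orEmpty() to StringBuilder() }
            entry.second.append(part.argsFragment)
            emptyList()
        }
    }

    // Called once the upstream stream completes; flushes buffered fragments
    // as fully assembled tool calls.
    fun onStreamEnd(): List<AssembledToolCall> =
        buffered.map { (id, e) -> AssembledToolCall(id, e.first, e.second.toString()) }
            .also { buffered.clear() }
}
```

In BUFFER_UNTIL_END mode the argument JSON for a call with a given id is concatenated across chunks, which is why providers that split arguments over many chunks (such as OpenAI) need buffering.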
The resulting flow is built with ai.koog.prompt.streaming.StreamFrameFlowBuilder, which automatically pairs each StreamFrame.ToolCallDelta with a corresponding StreamFrame.ToolCallComplete and emits StreamFrame.TextComplete / StreamFrame.ReasoningComplete boundary frames.
All blocking I/O runs on the configured dispatcher (default Dispatchers.IO).
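A caller might consume the resulting frames along these lines. The sealed hierarchy below is a simplified stand-in for Koog's StreamFrame types (only a few variants, no coroutines), so the sketch stays self-contained; the real flow would be collected with kotlinx.coroutines.

```kotlin
// Simplified stand-in for Koog's StreamFrame hierarchy, for illustration only.
sealed interface StreamFrame {
    data class TextDelta(val text: String) : StreamFrame
    data class ToolCallComplete(val id: String, val name: String, val args: String) : StreamFrame
    data object TextComplete : StreamFrame
}

// Accumulates streamed text and records which tools were called.
fun summarizeFrames(frames: List<StreamFrame>): Pair<String, List<String>> {
    val text = StringBuilder()
    val toolNames = mutableListOf<String>()
    for (frame in frames) when (frame) {
        is StreamFrame.TextDelta -> text.append(frame.text)      // partial text, safe to render live
        is StreamFrame.ToolCallComplete -> toolNames += frame.name // call is fully assembled here
        StreamFrame.TextComplete -> {}                            // boundary of a text segment
    }
    return text.toString() to toolNames
}
```

Because ToolCallComplete is only emitted once a call is fully assembled (immediately for Anthropic/Google, after stream end for OpenAI), the consumer never has to reassemble argument fragments itself.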