executeStreaming
open override fun executeStreaming(prompt: Prompt, model: LLModel, tools: List<ToolDescriptor>): Flow<StreamFrame>
Executes a prompt and returns a streaming flow of response chunks.
Return
A Flow of StreamFrame chunks, emitted incrementally as the model produces output
Parameters
prompt - The prompt to execute
model - The LLM model to use
tools - List of tools that the LLM may invoke; pass an empty list if no tools should be available
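A minimal sketch of consuming the returned flow. The executor instance, the prompt and model values, and the way each frame is rendered are assumptions for illustration; only the `executeStreaming` signature itself comes from this page.

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.runBlocking

// Hypothetical usage: `executor`, `myPrompt`, and `myModel` are assumed to be
// set up elsewhere; they are not part of the documented API shown above.
fun main() = runBlocking {
    val frames: Flow<StreamFrame> = executor.executeStreaming(
        prompt = myPrompt,
        model = myModel,
        tools = emptyList(), // no tools exposed to the LLM in this example
    )

    // Each frame arrives as soon as the model emits it, so partial output
    // can be shown to the user before the full response is complete.
    frames.collect { frame ->
        println(frame)
    }
}
```

Because the result is a cold Flow, no request is made until a terminal operator such as `collect` runs, and collection must happen inside a coroutine.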