executeStreaming

abstract fun executeStreaming(prompt: Prompt, model: LLModel): Flow&lt;String&gt;

Executes a prompt and returns a streaming flow of response chunks.

Return

A Flow that emits response chunks as they are generated by the model

Parameters

prompt

The prompt to execute

model

The LLM model to use
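
A minimal usage sketch. Only the documented signature and the standard kotlinx.coroutines Flow API are relied on; the executor instance, the Prompt, and the LLModel are assumed to be constructed elsewhere with the surrounding library, and the names at the commented call site are placeholders.

```kotlin
import kotlinx.coroutines.flow.*

// Collects the Flow<String> returned by executeStreaming, printing each chunk
// as it arrives and returning the fully assembled response.
suspend fun collectStreamedResponse(chunks: Flow<String>): String {
    val full = StringBuilder()
    chunks.collect { chunk ->
        print(chunk)        // incremental output as each chunk is emitted
        full.append(chunk)
    }
    return full.toString()
}

// Hypothetical call site; `executor`, `prompt`, and `model` stand in for
// instances of the declaring type, Prompt, and LLModel:
// val response = collectStreamedResponse(executor.executeStreaming(prompt, model))
```

Because the result is a cold Flow, no request is sent until the flow is collected; collecting it inside a coroutine lets callers render partial output while the model is still responding.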