Executes a prompt and returns a streaming flow of response chunks.

@param prompt The prompt to execute.
@param model The LLM model to use.
@return Flow of response chunks.
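A minimal sketch of what a function matching this doc comment might look like, using kotlinx.coroutines `Flow`. The function name `executeStreaming`, the parameter names, and the canned chunks are assumptions for illustration, not the actual API:

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flow
import kotlinx.coroutines.runBlocking

/**
 * Executes a prompt and returns a streaming flow of response chunks.
 *
 * @param prompt The prompt to execute
 * @param model The LLM model to use
 * @return Flow of response chunks
 */
fun executeStreaming(prompt: String, model: String): Flow<String> = flow {
    // A real implementation would call the LLM backend and emit each
    // chunk as it arrives; here we emit canned chunks for illustration.
    for (chunk in listOf("Streaming ", "response ", "chunks")) {
        emit(chunk)
    }
}

fun main() = runBlocking {
    // Collect the flow; each chunk is printed as soon as it is emitted.
    executeStreaming(prompt = "Say hello", model = "example-model")
        .collect { chunk -> print(chunk) }
}
```

Returning a cold `Flow` means no work happens until the caller collects it, which fits the streaming-chunk contract described above.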