ON_LLM_CALL

With this mode, prompt messages are ingested before each LLM call starts, and the assistant output is ingested after the call completes (or, for streaming calls, after the stream finishes). This enables intra-session RAG and provides crash resilience.

Properties

expect val name: String
expect val ordinal: Int
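
As a minimal sketch, the entry above behaves like any Kotlin enum constant: `name` and `ordinal` are the standard built-in enum properties. The enum class name `IngestionMode` below is an assumption for illustration, not the real declaring type.

```kotlin
// Hypothetical sketch: an ingestion-mode enum like the one documented here.
// The enum class name is an assumption; ON_LLM_CALL mirrors the documented entry.
enum class IngestionMode {
    // Ingest prompt messages before each LLM call; ingest the assistant
    // output after the call (or stream) completes.
    ON_LLM_CALL
}

fun main() {
    val mode = IngestionMode.ON_LLM_CALL
    // `name` and `ordinal` are the properties listed above.
    println(mode.name)    // ON_LLM_CALL
    println(mode.ordinal) // 0
}
```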