tokenizer

Defines the tokenizer used to estimate token counts for text strings.

Tokenizers are critical for features that require token-level control or analysis, such as checking input size against limits or optimizing messages for LLM prompts. By default this setting is null, which disables token counting; it can be replaced with a custom implementation of the Tokenizer interface.
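
As a minimal sketch, here is one way a custom tokenizer could look, assuming the Tokenizer interface exposes a single counting method (the method name, the TypeScript rendering, and the config property shown are assumptions for illustration, not the library's confirmed API):

```typescript
// Hypothetical shape of the Tokenizer interface; the actual method
// name and signature may differ in the library.
interface Tokenizer {
  countTokens(text: string): number;
}

// Cheap character-based estimator: ~4 characters per token is a
// common rule of thumb for English text.
const heuristicTokenizer: Tokenizer = {
  countTokens(text: string): number {
    return Math.ceil(text.length / 4);
  },
};

// Enable token counting by assigning the tokenizer (property name
// assumed; by default it is null and counting is disabled).
const config = { tokenizer: heuristicTokenizer };
```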

Swapping in a different tokenizer lets you pick a token estimation strategy whose accuracy and performance fit your use case, as the sketch below illustrates.
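
For example, the two hypothetical estimators below (reusing the assumed interface from the earlier sketch) sit at different points on that trade-off: the first is fast but coarse, the second does more work per call to get closer to subword-level counts.

```typescript
// Fast but rough: counts whitespace-delimited words, so it ignores
// punctuation and subword splits entirely.
const wordTokenizer: Tokenizer = {
  countTokens(text: string): number {
    const trimmed = text.trim();
    return trimmed === "" ? 0 : trimmed.split(/\s+/).length;
  },
};

// Slower but finer-grained: counts letter runs, digits, and
// punctuation marks separately, which tracks BPE-style tokenizers
// more closely than a plain word count.
const finerTokenizer: Tokenizer = {
  countTokens(text: string): number {
    const matches = text.match(/[A-Za-z]+|\d|[^\sA-Za-z\d]/g);
    return matches ? matches.length : 0;
  },
};
```

Heuristics like these are only estimates; when exact counts matter, a tokenizer backed by the target model's actual vocabulary should be plugged in instead.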