MistralAIParams
MistralAI chat-completions parameters layered on top of LLMParams.
Constructors
Properties
frequencyPenalty: Number in [-2.0, 2.0] that penalizes the repetition of words based on their frequency in the generated text.
parallelToolCalls: Whether to allow multiple tool calls in parallel.
presencePenalty: Number in [-2.0, 2.0] that determines how much the model penalizes the repetition of words or phrases.
promptMode: Toggles between reasoning mode and no system prompt. When set to reasoning, the system prompt for reasoning models is used.
randomSeed: The seed to use for random sampling. If set, repeated calls produce deterministic results.
safePrompt: Whether to inject a safety prompt before all conversations.
topP: Nucleus sampling: the model considers only the tokens comprising the topP probability mass, so 0.1 means only the tokens in the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both.
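To make the penalty and nucleus-sampling parameters above concrete, here is a small illustrative Python sketch of how a frequency/presence penalty and top-p filtering are typically applied to token logits. This is not the MistralAI or library implementation, just a toy model of the behavior these parameters describe; the function names and the example logits are invented for illustration.

```python
import math
from collections import Counter

def apply_penalties(logits, generated_tokens,
                    frequency_penalty=0.0, presence_penalty=0.0):
    """Lower the logits of tokens that already appeared in the output.

    frequency_penalty scales with how often a token has appeared;
    presence_penalty is a flat offset applied once a token appears at all.
    """
    counts = Counter(generated_tokens)
    adjusted = dict(logits)
    for token, count in counts.items():
        if token in adjusted:
            adjusted[token] -= count * frequency_penalty + presence_penalty
    return adjusted

def top_p_candidates(logits, top_p):
    """Return the smallest high-probability token set whose cumulative
    softmax probability reaches top_p (nucleus sampling)."""
    total = sum(math.exp(v) for v in logits.values())
    ranked = sorted(((t, math.exp(v) / total) for t, v in logits.items()),
                    key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, prob in ranked:
        kept.append(token)
        cumulative += prob
        if cumulative >= top_p:
            break
    return kept

logits = {"the": 2.0, "a": 1.0, "zebra": -1.0}
# A small top_p keeps only the most probable token.
print(top_p_candidates(logits, 0.1))
# Repeated tokens get pushed down: "the" appeared twice, so its logit
# drops by 2 * frequency_penalty + presence_penalty.
print(apply_penalties(logits, ["the", "the"],
                      frequency_penalty=0.5, presence_penalty=0.2))
```

Note how the frequency penalty grows with each repetition while the presence penalty is a one-time cost; this is why the two are exposed as separate knobs.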
Functions
Creates a copy of this instance with the ability to modify any of its properties.