moderate

abstract suspend fun moderate(prompt: Prompt, model: LLModel): ModerationResult

Moderates the content of a given prompt, including any attachments, using the specified LLM.

This method evaluates the prompt's messages and their attachments to determine whether they comply with content guidelines. The moderation is performed by the provided LLM, which analyzes the content and returns a detailed moderation result.

Return

A ModerationResult containing information about the moderation outcome, including flagged categories, scores, and whether the content is classified as harmful.

Parameters

prompt

The prompt whose messages and attachments will be moderated.

model

The LLM that will be used to perform the moderation.
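A minimal sketch of how a caller might use `moderate` and act on the result. The real `Prompt`, `LLModel`, and `ModerationResult` types come from the library; the stand-in definitions, field names (`isHarmful`, `categories`), and the `KeywordModerator` class below are assumptions made so the sketch is self-contained and compiles on its own.

```kotlin
import kotlin.coroutines.Continuation
import kotlin.coroutines.EmptyCoroutineContext
import kotlin.coroutines.startCoroutine

// Stand-in types so this sketch is self-contained; in the real library these
// come from the SDK, and the field names here are illustrative assumptions.
data class Prompt(val text: String)
data class LLModel(val id: String)
data class ModerationResult(
    val isHarmful: Boolean,
    val categories: Map<String, Double>, // flagged category -> score
)

// A toy moderator that mimics the contract of `moderate`: inspect the prompt
// and return per-category scores plus an overall harmful/safe verdict.
class KeywordModerator {
    suspend fun moderate(prompt: Prompt, model: LLModel): ModerationResult {
        val flagged = "attack" in prompt.text.lowercase()
        return ModerationResult(
            isHarmful = flagged,
            categories = mapOf("violence" to if (flagged) 0.95 else 0.02),
        )
    }
}

fun main() {
    // Drive the suspend function to completion; the toy moderator never
    // actually suspends, so this finishes synchronously.
    val job: suspend () -> Unit = {
        val result = KeywordModerator()
            .moderate(Prompt("plan an attack"), LLModel("demo-model"))
        if (result.isHarmful) {
            println("Blocked, categories: ${result.categories}")
        } else {
            println("Content passed moderation")
        }
    }
    job.startCoroutine(Continuation(EmptyCoroutineContext) { it.getOrThrow() })
}
```

In application code the suspend function would typically be called from an existing coroutine scope rather than driven manually with `startCoroutine`; the verdict and per-category scores can then gate whether the prompt is forwarded to the model.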