moderate
fun moderate(prompt: Prompt, model: LLModel, executorService: ExecutorService? = null): ModerationResult
Analyzes the provided prompt for violations of content policies or other moderation criteria.
Return
The result of the moderation analysis, encapsulated in a ModerationResult object.
Parameters
prompt
The input prompt to analyze for moderation.
model
The language model used to perform the moderation analysis.
executorService
An optional ExecutorService used to control the execution context of the call; defaults to null when omitted.
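The sketch below shows one way to call moderate with an explicit ExecutorService. Only the signature above is taken from this page; the library imports for Prompt, LLModel, ModerationResult, and moderate are omitted, and how the Prompt and LLModel values are obtained is an assumption of the example.

```kotlin
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors

// Sketch only: assumes Prompt, LLModel, ModerationResult, and moderate are
// imported from the library; the caller supplies the prompt and model.
fun checkPrompt(prompt: Prompt, model: LLModel) {
    // Supplying an ExecutorService is optional; passing null (the default)
    // lets the library choose its own execution context.
    val executor: ExecutorService = Executors.newSingleThreadExecutor()
    try {
        val result: ModerationResult = moderate(
            prompt = prompt,
            model = model,
            executorService = executor,
        )
        println("Moderation result: $result")
    } finally {
        // Shut down the executor we created once the call completes.
        executor.shutdown()
    }
}
```

Omitting the executorService argument entirely, as in moderate(prompt, model), is equivalent to passing null.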