moderate

open suspend override fun moderate(prompt: Prompt, model: LLModel): ModerationResult

Moderates the provided prompt using the specified moderation guardrails settings. The method evaluates both the input and the output of the prompt against the guardrails, determines whether either is harmful, and returns a corresponding result.

Requires moderationGuardrailsSettings to be set on this BedrockLLMClient.

Note: the model parameter is unused by this method.
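
A minimal usage sketch is shown below. Only moderate(prompt, model) itself comes from the signature above; the surrounding function, the pre-configured client, and the ModerationResult field names (isHarmful, categories) are assumptions based on the Return description, not confirmed API.

```kotlin
// Minimal sketch, assuming `client` is a BedrockLLMClient that was created
// with moderationGuardrailsSettings already set. The ModerationResult field
// names (`isHarmful`, `categories`) are assumptions for illustration.
suspend fun moderatePrompt(client: BedrockLLMClient, prompt: Prompt, model: LLModel) {
    val result: ModerationResult = client.moderate(prompt, model)
    if (result.isHarmful) {
        // The categorized map indicates which guardrail categories flagged the content.
        println("Content flagged: ${result.categories}")
    } else {
        println("Content passed moderation")
    }
}
```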

Return

a ModerationResult containing a flag for whether the content is harmful, along with a map of per-category moderation results.

Parameters

prompt

the input text/content to be evaluated.

model

the large language model to use for evaluation (currently unused; see the note above).

Throws

if moderation guardrails settings are not provided for this client.
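
Since the exception type is not specified above, a defensive sketch using Kotlin's runCatching is one way to call moderate when the client's configuration is not known. This pattern is an assumption, not part of the documented API:

```kotlin
// Sketch: moderate throws when moderationGuardrailsSettings was not set, so
// guard the call defensively. The exception type is unspecified in the docs,
// so Result-based handling is used here. (In production coroutine code,
// prefer re-throwing CancellationException rather than swallowing it.)
suspend fun tryModerate(client: BedrockLLMClient, prompt: Prompt, model: LLModel) {
    runCatching { client.moderate(prompt, model) }
        .onSuccess { result -> println("harmful=${result.isHarmful}") } // field name assumed
        .onFailure { e -> println("Moderation unavailable: ${e.message}") }
}
```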