Moderation

Represents a Large Language Model (LLM) capability for content moderation.

This capability allows the model to analyze text for potentially harmful content and classify it into categories such as harassment, hate speech, self-harm, sexual content, and violence.
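
A minimal usage sketch is shown below. The Moderator interface, ModerationCategory enum, and ModerationResult class are hypothetical names introduced for illustration and are not part of this API; the sketch only shows how text flagged by such a capability might be screened and reported.

// Hypothetical types for illustration only; not part of this API.
enum class ModerationCategory { HARASSMENT, HATE_SPEECH, SELF_HARM, SEXUAL, VIOLENCE }

data class ModerationResult(
    val flagged: Boolean,
    val scores: Map<ModerationCategory, Double>,
)

interface Moderator {
    // Analyzes the given text and returns per-category harm scores.
    suspend fun moderate(text: String): ModerationResult
}

suspend fun screen(moderator: Moderator, userMessage: String) {
    val result = moderator.moderate(userMessage)
    if (result.flagged) {
        // Report the highest-scoring category.
        val worst = result.scores.entries.maxByOrNull { it.value }
        println("Blocked: ${worst?.key} (score=${worst?.value})")
    } else {
        println("Message passed moderation.")
    }
}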

Properties

val id: String

The unique identifier for this capability.