ModerationCategory

Represents a category used in content moderation to classify potentially harmful or inappropriate content. Each category identifies a specific type of violation that content may fall under.

Inheritors

Constructors

constructor(name: String)
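A minimal usage sketch; the category name string shown here is illustrative only, not necessarily a value this SDK defines:

```kotlin
// Construct a moderation category by name.
// "policy-violation" is a hypothetical name used purely for illustration.
val custom = ModerationCategory(name = "policy-violation")
```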

Types


Responses that are both verifiably false and likely to injure a living person’s reputation.


Responses that contain factually incorrect information about electoral systems and processes, including the time, place, or manner of voting in civic elections.


Represents the "Harassment" moderation category.


Represents the moderation category for harassment of a threatening nature.


Represents content categorized as hate speech or related material.


Represents the HATE_THREATENING moderation category.


Represents the moderation category for content that may involve illegal or illicit activities. This category is used to identify content that violates legal frameworks or ethical guidelines.


Represents content classified as both illicit and violent in nature.


Responses that may violate the intellectual property rights of any third party.


Represents a predefined moderation category for content associated with misconduct.


Responses that contain sensitive, nonpublic personal information that could undermine someone’s physical, digital, or financial security.


Represents the moderation category for identifying and handling potential prompt attacks.


Represents the "SELF_HARM" moderation category. This category is used to identify content that pertains to self-harm or related behavior.


Represents the moderation category for instructions or content that encourages or promotes self-harm.


Represents content that explicitly indicates an intent to self-harm.


Represents content categorized as sexual in nature.


Represents content related to sexual material involving minors.


Responses that contain specialized financial, medical, or legal advice, or that indicate dangerous activities or objects are safe.


Represents the moderation category for content classified as violent behavior or actions.


Represents the VIOLENCE_GRAPHIC moderation category.

Properties

val name: String

Functions

open operator override fun equals(other: Any?): Boolean

Compares this object with another for equality. Two instances of ModerationCategory are equal if their names are equal.
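Because equality is determined solely by the name, independently constructed instances with equal names compare equal. A sketch (the category name strings are illustrative):

```kotlin
val a = ModerationCategory("harassment")
val b = ModerationCategory("harassment")

// Equality compares only the `name` property.
check(a == b)                          // same name: equal
check(a != ModerationCategory("hate")) // different name: not equal
```

If hashCode is overridden consistently with equals (not shown in this excerpt), such instances can also be deduplicated in a Set or used interchangeably as map keys.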