The moderations endpoint is a tool you can use to check whether content complies with an LLM provider's policies.
cURL

```shell
curl --request POST \
  --url https://api.anyapi.ai/v1/moderations \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "input": "Sample text goes here",
    "model": "text-moderation-stable"
  }'
```
```json
{
  "id": "<string>",
  "model": "<string>",
  "results": [
    {
      "flagged": true,
      "categories": {},
      "category_scores": {}
    }
  ]
}
```
Request

- `Authorization` header: Bearer token authentication. Get your API key from the dashboard.
- `input` (string): The text to check for moderation. Example: `"Sample text goes here"`
- `model` (string): The moderation model to use. Example: `"text-moderation-stable"`
Successful response

- `id`: Response identifier.
- `model`: Model used for moderation.
- `results`: Array of moderation results, each containing:
  - `flagged`: Whether the content was flagged.
  - `categories`: Categories that were checked.
  - `category_scores`: Scores for each category.
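Putting the request and response shapes above together, the endpoint can be called from Python roughly as follows. This is a minimal sketch using only the standard library; the `moderate` and `is_flagged` helper names are illustrative, not part of the API, and the sample response at the bottom is the documented example shape rather than a live API result.

```python
import json
import urllib.request

API_URL = "https://api.anyapi.ai/v1/moderations"

def moderate(text: str, token: str, model: str = "text-moderation-stable") -> dict:
    """POST text to the moderations endpoint and return the parsed JSON response."""
    payload = json.dumps({"input": text, "model": model}).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def is_flagged(response: dict) -> bool:
    """True if any result in a moderation response was flagged."""
    return any(r["flagged"] for r in response.get("results", []))

# Parsing demonstrated on the documented sample response (no network call):
sample = {
    "id": "mod-123",
    "model": "text-moderation-stable",
    "results": [{"flagged": True, "categories": {}, "category_scores": {}}],
}
print(is_flagged(sample))  # True
```

Checking `flagged` per result (rather than assuming a single result) matches the documented schema, where `results` is an array.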