APIs break. Networks fail. Models go offline. It’s not if you’ll hit errors—it’s when. AnyAPI uses standard HTTP status codes and gives you structured error responses so you can handle failures gracefully instead of letting your app crash and burn.
What Error Responses Look Like
Every error follows the same format, so you always know what you’re dealing with:
{
  "error": {
    "code": 400,
    "message": "Invalid request format",
    "type": "bad_request",
    "metadata": {
      "field": "messages",
      "reason": "Required field missing"
    }
  }
}
Breaking It Down
- code: The HTTP status code (because standards matter)
- message: What actually went wrong, in human speak
- type: Error category for your code to handle programmatically
- metadata: Extra details when you need to dig deeper (optional)
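Because every error shares this envelope, a small helper can normalize it before your handlers run. A minimal sketch (the field names follow the format above; the helper itself is illustrative, not part of any SDK):

```python
def parse_error(payload: dict) -> tuple[int, str, str, dict]:
    """Pull the standard fields out of an error envelope."""
    err = payload.get("error", {})
    return (
        err.get("code", 0),
        err.get("message", ""),
        err.get("type", "unknown"),
        err.get("metadata") or {},  # metadata is optional
    )

code, message, err_type, meta = parse_error({
    "error": {
        "code": 400,
        "message": "Invalid request format",
        "type": "bad_request",
        "metadata": {"field": "messages", "reason": "Required field missing"},
    }
})
```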
The Status Code Cheat Sheet
2xx: Everything’s Cool
200 OK
Your request worked perfectly. Pat yourself on the back.
{
  "id": "gen-abc123",
  "object": "chat.completion",
  "choices": [...]
}
4xx: You Did Something Wrong (Don’t Panic)
400 Bad Request
Your request was malformed or had invalid parameters.
{
  "error": {
    "code": 400,
    "message": "Invalid parameter 'temperature'. Must be between 0 and 2.",
    "type": "bad_request",
    "metadata": {
      "parameter": "temperature",
      "value": 3.5,
      "valid_range": [0, 2]
    }
  }
}
Why this happens:
- Forgot required parameters
- Parameter values are out of bounds
- Your JSON is wonky
- Model name doesn’t exist
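Most of these causes can be caught locally before the request ever leaves your machine. A minimal pre-flight check (the `messages` requirement and temperature bounds come from the examples above; the helper is illustrative):

```python
def preflight_check(params: dict) -> list[str]:
    """Catch the common 400 causes locally before sending the request."""
    problems = []
    if "messages" not in params:
        problems.append("missing required field 'messages'")
    temperature = params.get("temperature")
    if temperature is not None and not 0 <= temperature <= 2:
        problems.append(f"temperature {temperature} is outside [0, 2]")
    return problems

problems = preflight_check({"temperature": 3.5})
```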
401 Unauthorized
Your API key is wrong, missing, or revoked.
{
  "error": {
    "code": 401,
    "message": "Invalid API key provided",
    "type": "authentication_error"
  }
}
Fix it:
- Double-check your API key isn’t typo’d
- Make sure the key hasn't been revoked (keys accidentally committed to public repos often are)
- Verify the Authorization header format is correct
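The header format itself trips people up surprisingly often. A minimal sketch of building it, assuming the common `Bearer` token scheme (check the authentication docs for the exact format AnyAPI expects; the key below is made up):

```python
def build_auth_header(api_key: str) -> dict:
    """Build the Authorization header. Stray whitespace from a
    copy/paste is a classic cause of 'Invalid API key provided'."""
    return {"Authorization": f"Bearer {api_key.strip()}"}

headers = build_auth_header(" sk-example-key\n")
```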
402 Payment Required
You’re out of credits. Time to top up.
{
  "error": {
    "code": 402,
    "message": "Your account or API key has insufficient anytokens. Add more anytokens and retry the request.",
    "type": "budget_exceeded",
    "param": null
  }
}
Fix it:
- Add more credits to your account
- Switch to a cheaper model
- Reduce max_tokens to lower costs
403 Forbidden
Content moderation blocked your request.
{
  "error": {
    "code": 403,
    "message": "Your request was flagged by content moderation",
    "type": "moderation_error",
    "metadata": {
      "reasons": ["violence", "hate"],
      "flagged_input": "the harmful content...",
      "provider_name": "openai",
      "model_slug": "gpt-4o"
    }
  }
}
Fix it:
- Rephrase your prompt to remove problematic content
- Try a different model with different moderation rules
- Contact support if this seems like a false positive
408 Request Timeout
Your request took too long and we gave up.
{
  "error": {
    "code": 408,
    "message": "Request timed out after 120 seconds",
    "type": "timeout_error",
    "metadata": {
      "timeout_seconds": 120
    }
  }
}
Fix it:
- Lower your max_tokens setting
- Simplify your prompt
- Try again (sometimes it just works the second time)
- Use streaming to get partial results
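The "try again" tip is easy to automate with a single retry on timeout. A sketch using a stand-in `request_func` (swap in your real API call; the flaky stub below just simulates one timeout):

```python
import time

def call_with_timeout_retry(request_func, retries=1):
    """Retry once on timeout -- a second attempt often just works."""
    for attempt in range(retries + 1):
        try:
            return request_func()
        except TimeoutError:
            if attempt == retries:
                raise
            time.sleep(0.1)  # brief pause before the retry

# Stub that times out once, then succeeds (simulates a flaky call)
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise TimeoutError("Request timed out after 120 seconds")
    return {"id": "gen-abc123"}

result = call_with_timeout_retry(flaky)
```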
422 Unprocessable Entity
Your request is valid but the model can’t handle what you’re asking.
{
  "error": {
    "code": 422,
    "message": "Model does not support the requested feature",
    "type": "unsupported_feature_error",
    "metadata": {
      "feature": "function_calling",
      "model": "some/model-name"
    }
  }
}
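One way to recover from a 422 is to drop the rejected feature and retry. A sketch, assuming a hypothetical mapping from the `feature` name in `metadata` to the request field that carries it (extend the mapping for the features you actually use):

```python
# Hypothetical mapping from the error's `feature` name to the
# request field behind it -- not part of any official SDK.
FEATURE_TO_FIELD = {"function_calling": "tools"}

def strip_unsupported(request: dict, error: dict) -> dict:
    """Remove the field behind an unsupported_feature_error so the
    request can be retried without it."""
    feature = error.get("metadata", {}).get("feature")
    field = FEATURE_TO_FIELD.get(feature)
    if field is None:
        return request
    return {k: v for k, v in request.items() if k != field}

slimmed = strip_unsupported(
    {"model": "some/model-name", "messages": [], "tools": [{"type": "function"}]},
    {"metadata": {"feature": "function_calling", "model": "some/model-name"}},
)
```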
429 Too Many Requests
Slow down there, speed racer. You’re hitting the rate limit.
{
  "error": {
    "code": 429,
    "message": "Rate limit exceeded. Try again in 60 seconds.",
    "type": "rate_limit_error",
    "metadata": {
      "retry_after": 60,
      "limit_type": "requests_per_minute",
      "current_usage": 1000,
      "limit": 1000
    }
  }
}
Fix it:
- Wait for the retry_after time (we tell you exactly how long)
- Implement exponential backoff in your code
- Upgrade to a higher tier for more requests
- Spread your requests over time
5xx: We Screwed Up (Sorry)
500 Internal Server Error
Something went wrong on our end.
{
  "error": {
    "code": 500,
    "message": "Internal server error occurred",
    "type": "server_error",
    "metadata": {
      "request_id": "req-abc123"
    }
  }
}
502 Bad Gateway
The AI model you requested is having a bad day.
{
  "error": {
    "code": 502,
    "message": "Model is temporarily unavailable",
    "type": "model_unavailable_error",
    "metadata": {
      "model": "openai/gpt-4o",
      "provider_name": "openai",
      "estimated_retry_after": 300
    }
  }
}
Fix it:
- Try a different model
- Wait a few minutes and try again
- Use our fallback models feature (it handles this automatically)
503 Service Unavailable
No providers are available for your requested model.
{
  "error": {
    "code": 503,
    "message": "No available providers for this model",
    "type": "no_providers_error",
    "metadata": {
      "model": "custom/fine-tuned-model",
      "checked_providers": ["provider1", "provider2"]
    }
  }
}
Smart Error Handling Patterns
1. Exponential Backoff (The Right Way)
import time
import random

def make_bulletproof_request(request_func, max_retries=5):
    """Make requests that don't give up too easily"""
    for attempt in range(max_retries):
        try:
            response = request_func()
            if response.status_code == 200:
                return response.json()
            elif response.status_code == 429:
                # Rate limited - respect the retry_after
                retry_after = response.json().get('error', {}).get('metadata', {}).get('retry_after', 60)
                print(f"Rate limited. Chilling for {retry_after} seconds...")
                time.sleep(retry_after + random.uniform(1, 5))
            elif response.status_code >= 500:
                # Server error - back off exponentially
                wait_time = (2 ** attempt) + random.uniform(0, 1)
                print(f"Server error. Waiting {wait_time:.1f}s before retry {attempt + 1}")
                time.sleep(wait_time)
            else:
                # Client error - don't retry, just fail
                raise ValueError(f"API error: {response.json()}")
        except ValueError:
            raise  # client errors aren't retryable
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)
    raise RuntimeError(f"Request failed after {max_retries} attempts")
2. Handle Specific Errors Like a Pro
async function handleApiResponse(response) {
  const data = await response.json();
  if (!response.ok) {
    const error = data.error;
    switch (error.code) {
      case 400:
        throw new ValidationError(error.message, error.metadata);
      case 401:
        throw new AuthenticationError("Check your API key, buddy");
      case 402:
        throw new InsufficientCreditsError("Time to add more credits!", error.metadata);
      case 403:
        throw new ModerationError("Content got flagged", error.metadata);
      case 429: {
        const retryAfter = error.metadata?.retry_after || 60;
        throw new RateLimitError(`Slow down! Try again in ${retryAfter}s`, retryAfter);
      }
      case 502:
        throw new ModelUnavailableError("Model is taking a nap", error.metadata);
      default:
        throw new ApiError(`Something went wrong: ${error.message}`, error.code);
    }
  }
  return data;
}
3. Smart Fallback Models
async def generate_with_fallback(prompt, models=None):
    """Try multiple models until one works"""
    if models is None:
        models = [
            "openai/gpt-4o",              # First choice
            "anthropic/claude-3-sonnet",  # Solid backup
            "google/gemini-pro"           # Emergency option
        ]
    for i, model in enumerate(models):
        try:
            response = await client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}]
            )
            if i > 0:
                print(f"Fell back to {model} after {i} failures")
            return response
        except Exception as e:
            if getattr(e, 'status_code', None) in (502, 503):
                # Model unavailable, try next one
                print(f"{model} is down, trying next fallback...")
                continue
            # Other error, don't bother with fallbacks
            raise
    raise Exception("All fallback models failed. The AI apocalypse has begun.")
Provider-Specific Errors
Sometimes you get the raw error from the AI provider:
{
  "error": {
    "code": 400,
    "message": "Invalid request to provider",
    "type": "provider_error",
    "metadata": {
      "provider_name": "openai",
      "raw": {
        "error": {
          "message": "The model `gpt-5` does not exist",
          "type": "invalid_request_error",
          "param": "model",
          "code": "model_not_found"
        }
      }
    }
  }
}
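When the `raw` block is present, you can surface the provider's own error code instead of the generic wrapper. A minimal sketch (the shapes follow the example above; the helper is illustrative):

```python
def provider_error_details(payload: dict):
    """Dig the upstream provider's own error out of metadata.raw,
    falling back to the top-level message when it isn't present."""
    err = payload.get("error", {})
    raw = err.get("metadata", {}).get("raw", {}).get("error")
    if raw:
        return raw.get("code"), raw.get("message")
    return None, err.get("message")

provider_code, provider_msg = provider_error_details({
    "error": {
        "code": 400,
        "message": "Invalid request to provider",
        "type": "provider_error",
        "metadata": {
            "provider_name": "openai",
            "raw": {"error": {"message": "The model `gpt-5` does not exist",
                              "type": "invalid_request_error",
                              "param": "model",
                              "code": "model_not_found"}},
        },
    }
})
```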
Logging & Monitoring (Because Debugging Sucks)
Track Request IDs
Every error includes a request ID. Use it to track down issues:
import logging

def log_api_error(response):
    """Log errors with all the juicy details"""
    error_data = response.json()
    request_id = error_data.get('error', {}).get('metadata', {}).get('request_id', 'unknown')
    logging.error(
        f"API Error {response.status_code}: {error_data['error']['message']} "
        f"(Request ID: {request_id}) - Now you can actually debug this!"
    )
Monitor Error Patterns
const errorStats = {
  rateLimits: 0,
  modelDown: 0,
  authFails: 0,
  clientErrors: 0,
  serverErrors: 0
};

function trackError(errorCode) {
  // Keep tabs on what's breaking
  if (errorCode === 429) errorStats.rateLimits++;
  else if (errorCode === 502) errorStats.modelDown++;
  else if (errorCode === 401) errorStats.authFails++;
  else if (errorCode >= 400 && errorCode < 500) errorStats.clientErrors++;
  else if (errorCode >= 500) errorStats.serverErrors++;

  // Alert when things get weird
  if (errorStats.rateLimits > 100) {
    console.warn("You're hitting rate limits a lot. Maybe upgrade your plan?");
  }
}
Testing Your Error Handling
def test_error_handling():
    """Make sure your error handling actually works"""
    # Test bad auth
    try:
        client = AnyAPIClient(api_key="totally-fake-key")
        response = client.chat.completions.create(...)
        assert False, "This should have failed!"
    except AuthenticationError:
        print("✅ Auth error handling works")

    # Test rate limiting (carefully!)
    try:
        for i in range(1000):  # Don't actually do this
            response = client.chat.completions.create(...)
    except RateLimitError as e:
        print(f"✅ Rate limit handling works (retry after {e.retry_after}s)")
Pro Tips for Error Handling
1. Graceful Degradation
Always have a Plan B when the API is down:
def generate_text(prompt):
    try:
        return anyapi_generate(prompt)
    except ApiError:
        # Fall back to cached responses, simpler logic, or user notification
        return "Sorry, AI is taking a break. Try again in a few minutes!"
2. User-Friendly Error Messages
Don’t show users raw error codes:
def user_friendly_error(error):
    """Turn technical errors into messages humans can understand"""
    friendly_messages = {
        401: "Looks like there's an issue with your account. Please contact support.",
        402: "You've reached your usage limit. Time to upgrade! 🚀",
        403: "That request contains content we can't process. Try rephrasing it.",
        429: "Whoa there! You're making requests too quickly. Take a short break.",
        500: "Our servers are having a moment. Please try again in a bit.",
        502: "The AI model is temporarily unavailable. We're on it!"
    }
    return friendly_messages.get(error.code, "Something unexpected happened. Our team has been notified.")
3. Smart Timeouts
Set timeouts that match your use case:
# Real-time chat: fail fast
timeout = 10 # seconds
# Batch processing: be patient
timeout = 300 # seconds
# Long-form content generation: somewhere in between
timeout = 60 # seconds
4. Circuit Breaker Pattern
Stop hammering broken services:
import time

class CircuitBreaker:
    """Protect your app from repeatedly calling broken services"""
    def __init__(self, failure_threshold=5, timeout=60):
        self.failure_threshold = failure_threshold
        self.timeout = timeout
        self.failure_count = 0
        self.last_failure_time = None
        self.state = 'CLOSED'  # CLOSED (normal), OPEN (broken), HALF_OPEN (testing)

    def call(self, func):
        if self.state == 'OPEN':
            if time.time() - self.last_failure_time > self.timeout:
                self.state = 'HALF_OPEN'
                print("Circuit breaker: Attempting to reconnect...")
            else:
                raise CircuitOpenError("Service is still broken, not trying again yet")
        try:
            result = func()
            self.on_success()
            return result
        except Exception:
            self.on_failure()
            raise

    def on_success(self):
        self.failure_count = 0
        self.state = 'CLOSED'

    def on_failure(self):
        self.failure_count += 1
        self.last_failure_time = time.time()
        if self.failure_count >= self.failure_threshold:
            self.state = 'OPEN'
            print(f"Circuit breaker: Service marked as broken after {self.failure_count} failures")
Monitor These Metrics
Set up alerts for:
- Error rate spikes - Something’s going wrong
- Specific error patterns - Identify systemic issues
- Model availability - Know when your preferred models are down
- Rate limit proximity - Upgrade before you hit the wall
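These alerts don't need heavy tooling to start with. A minimal sliding-window error-rate tracker (illustrative; wire the returned flag into whatever alerting you already have):

```python
from collections import deque

class ErrorRateMonitor:
    """Track the error rate over a sliding window of recent requests
    and flag when it spikes past a threshold."""
    def __init__(self, window=100, threshold=0.2):
        self.window = deque(maxlen=window)  # 1 = error, 0 = success
        self.threshold = threshold

    def record(self, ok: bool) -> bool:
        """Record a request outcome; return True if the error rate
        in the window now exceeds the threshold."""
        self.window.append(0 if ok else 1)
        rate = sum(self.window) / len(self.window)
        return rate > self.threshold

monitor = ErrorRateMonitor(window=10, threshold=0.3)
spiked = False
for ok in [True] * 6 + [False] * 4:  # 40% errors in the window
    spiked = monitor.record(ok)
```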
Remember: Good error handling is invisible to users but saves your sanity. Handle errors gracefully, log everything you need for debugging, and always have a fallback plan.