
Never Pay for Nothing

Automatic protection from failed AI requests.

Ever get charged for an AI request that returned absolutely nothing? Those moments when the model crashes, times out, or returns an empty response on your dime? Zero Completion Insurance has your back: automatically, completely, forever.

The Problem We Solved

The old way:
  • Model overloaded? That’ll be $0.15 please.
  • Provider timeout? Here’s your bill for zero tokens.
  • Empty response? Thanks for the donation!
The AnyAPI way:
  • Failed request? $0.00
  • Empty response? $0.00
  • Provider issues? $0.00
  • Zero tokens delivered? $0.00
Zero exceptions. Zero configuration. Zero tolerance for charging you for nothing.

How It Works

Zero Completion Insurance is always on. No toggles, no settings, no premium plans. Every single request is automatically protected from day one. The system watches for:
  • Empty responses — Zero completion tokens = $0 charge
  • Error responses — Provider errors = $0 charge
  • Failed requests — Crashes, timeouts, etc. = $0 charge
  • Model failures — Overloaded, unavailable = $0 charge
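
The checks above can be sketched as one helper. The field names (`choices[0].message.content`, `usage.completion_tokens`, a top-level `error` object) follow the standard chat-completions response shape used in the examples later on this page; treat the exact error format as an assumption.

```python
def is_zero_completion(result: dict) -> bool:
    """Return True when a response delivered no billable completion."""
    if "error" in result:  # provider returned an error payload instead of a completion
        return True
    choices = result.get("choices") or [{}]
    content = choices[0].get("message", {}).get("content")
    completion_tokens = result.get("usage", {}).get("completion_tokens", 0)
    # Blank content or zero completion tokens means a $0 charge
    return not content or completion_tokens == 0
```

Any request for which this returns `True` falls under the insurance and costs nothing.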

What’s Protected (And What Isn’t)

Fully Protected

  • Completion token costs — Zero tokens = zero charges, period
  • Failed API calls — System errors don’t cost you anything
  • Empty responses — Blank outputs are free, as they should be
  • Provider issues — Their problems aren’t your financial problems

Partial Protection

  • Input token processing — You may pay for prompt processing even if completion fails
  • Successful requests — Real responses with actual content are charged normally
  • Additional features — Web search, file processing, etc. still apply when used
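
The split can be illustrated with a rough cost estimate. The per-token prices here are placeholder arguments, not real AnyAPI rates, and the `usage` field names follow the standard chat-completions shape.

```python
def estimate_charge(usage: dict, prompt_price: float, completion_price: float) -> float:
    """Rough per-request cost: prompt tokens may bill even when the completion fails."""
    prompt_tokens = usage.get("prompt_tokens", 0)
    completion_tokens = usage.get("completion_tokens", 0)
    # Zero completion tokens always means zero completion cost
    return prompt_tokens * prompt_price + completion_tokens * completion_price
```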

The Logic

You pay for value delivered. No value = no payment. It’s that simple.

Working with Protected Requests


Retry Pattern with Cost Protection

```python
import os
import time

import requests

# API key read from the environment (set ANYAPI_API_KEY before running)
API_KEY = os.environ.get("ANYAPI_API_KEY", "")

def resilient_request(payload, max_retries=3):
    """Make requests with automatic retry — failures are free"""

    for attempt in range(max_retries):
        try:
            response = requests.post(
                "https://api.anyapi.ai/v1/chat/completions",
                headers={
                    "Authorization": f"Bearer {API_KEY}",
                    "Content-Type": "application/json"
                },
                json=payload
            )

            result = response.json()
            content = result.get("choices", [{}])[0].get("message", {}).get("content")
            completion_tokens = result.get("usage", {}).get("completion_tokens", 0)

            if content and completion_tokens > 0:
                return {"success": True, "content": content}

            # Empty response — protected by insurance, retry
            print(f"Attempt {attempt + 1}: Empty response, retrying...")

        except Exception as e:
            print(f"Attempt {attempt + 1}: Failed with {e}")

        if attempt < max_retries - 1:
            time.sleep(2 ** attempt)

    return {"success": False, "message": "All attempts failed — no charges for failures"}

result = resilient_request({
    "model": "openai/gpt-5",
    "messages": [{"role": "user", "content": "Explain quantum computing"}]
})

```

Fallback Model Chain

```python
import os

import requests

# API key read from the environment (set ANYAPI_API_KEY before running)
API_KEY = os.environ.get("ANYAPI_API_KEY", "")

def request_with_fallback(prompt):
    """Try multiple models — only pay for the one that succeeds"""

    models = [
        "openai/gpt-5",
        "anthropic/claude-sonnet-4",
        "google/gemini-2.5-flash"
    ]

    for model in models:
        try:
            response = requests.post(
                "https://api.anyapi.ai/v1/chat/completions",
                headers={
                    "Authorization": f"Bearer {API_KEY}",
                    "Content-Type": "application/json"
                },
                json={
                    "model": model,
                    "messages": [{"role": "user", "content": prompt}]
                }
            )

            result = response.json()
            content = result.get("choices", [{}])[0].get("message", {}).get("content")

            if content:
                return {"model": model, "content": content}

            print(f"{model} returned empty — trying next (no charge)")

        except Exception:
            print(f"{model} failed — trying next (no charge)")

    return None
```

Pro Tips

  • Use retry logic — Insurance covers failures, retries often succeed
  • Use fallback model chains — Increase success rates while staying protected
  • Monitor failure patterns — High failure rates might indicate prompt or model issues
  • Implement exponential backoff — Give overloaded models time to recover
  • Don’t fear experimentation — Try demanding prompts without worrying about failure costs
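
The "monitor failure patterns" tip can be sketched as a small per-model counter; the class name and shape are illustrative, not part of any AnyAPI SDK.

```python
from collections import defaultdict

class FailureMonitor:
    """Count empty/failed responses per model to surface problem patterns."""

    def __init__(self):
        self.attempts = defaultdict(int)
        self.failures = defaultdict(int)

    def record(self, model: str, success: bool) -> None:
        self.attempts[model] += 1
        if not success:
            self.failures[model] += 1

    def failure_rate(self, model: str) -> float:
        # 0.0 when the model has never been tried
        return self.failures[model] / self.attempts[model] if self.attempts[model] else 0.0
```

A consistently high `failure_rate` for one model suggests a prompt or model issue rather than transient load.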

Never pay for nothing again. Zero Completion Insurance: always on, always protecting, always free.