Web Search

Your AI models are smart, but they’re living in the past. AnyAPI’s web search feature fixes that by giving them access to real-time web data, so they can answer questions about what’s happening right now, not just what they learned in training.

What This Unlocks

Turn any AI model into a research powerhouse:
  • Live web data - No more “I don’t know about recent events”
  • Current information - Break free from training cutoff dates
  • Automatic integration - Search results blend seamlessly into responses
  • Smart configuration - Control exactly how much web data you want
We use a hybrid search pipeline that combines keyword matching with AI embeddings to surface the most relevant content on the web.

Two Ways to Get Started

The Easy Way: Just Add :online

Slap :online on any model name and you get instant web search:
import requests

url = "https://api.anyapi.ai/api/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}

payload = {
    "model": "openai/gpt-4o:online",  # This one simple trick...
    "messages": [
        {
            "role": "user", 
            "content": "What are the latest developments in AI research this week?"
        }
    ]
}

response = requests.post(url, headers=headers, json=payload)
result = response.json()
print(result["choices"][0]["message"]["content"])
# Get answers about stuff that happened yesterday

The Power User Way: Web Plugin

Want more control? Use the web plugin:
{
  "model": "anthropic/claude-3-5-sonnet-20241022",
  "messages": [
    {
      "role": "user",
      "content": "What's happening in the stock market today?"
    }
  ],
  "plugins": [
    {
      "id": "web",
      "max_results": 5,
      "search_prompt": "Current stock market news and trends"
    }
  ]
}
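Here's the same request built in Python with a small helper. The endpoint URL and plugin fields come from the examples in this doc; `build_web_search_payload` itself is just an illustration, not part of any SDK:

```python
API_KEY = "your-api-key"  # placeholder; use your real AnyAPI key
URL = "https://api.anyapi.ai/api/v1/chat/completions"

def build_web_search_payload(model, user_content, max_results=5, search_prompt=None):
    """Assemble a chat payload with the web plugin attached."""
    plugin = {"id": "web", "max_results": max_results}
    if search_prompt is not None:
        plugin["search_prompt"] = search_prompt
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
        "plugins": [plugin],
    }

payload = build_web_search_payload(
    "anthropic/claude-3-5-sonnet-20241022",
    "What's happening in the stock market today?",
    search_prompt="Current stock market news and trends",
)
# Send with: requests.post(URL, headers={"Authorization": f"Bearer {API_KEY}",
#                                        "Content-Type": "application/json"}, json=payload)
```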

Dialing It In

Keep It Simple

payload = {
    "model": "openai/gpt-4o",
    "messages": [
        {"role": "user", "content": "Latest climate change research findings"}
    ],
    "plugins": [
        {
            "id": "web",
            "max_results": 10,        # More results = better info (default: 5)
            "search_context": "high"  # How much detail to include
        }
    ]
}

Go Full Control Freak

payload = {
    "model": "anthropic/claude-3-5-sonnet-20241022",
    "messages": [
        {"role": "user", "content": "Current cryptocurrency market analysis"}
    ],
    "plugins": [
        {
            "id": "web",
            "max_results": 8,
            "search_prompt": "Cryptocurrency market analysis Bitcoin Ethereum trends",
            "search_context": "medium",
            "domains": ["coindesk.com", "cointelegraph.com"],  # Only the good stuff
            "exclude_domains": ["reddit.com", "twitter.com"]   # Skip the noise
        }
    ]
}

Context Levels That Make Sense

Low Context: Quick & Cheap

  • Minimal info from search results
  • Fast responses with fewer tokens
  • Budget-friendly for simple questions
  • Perfect for: Quick fact-checks, yes/no answers

Medium Context: The Sweet Spot

  • Key information without the fluff
  • Balanced speed and detail
  • Fair pricing for most use cases
  • Perfect for: News updates, general research

High Context: Go Deep

  • Everything we found that matters
  • Comprehensive data for complex questions
  • Higher cost but worth it for research
  • Perfect for: Deep analysis, academic research
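To make the choice concrete, here's a tiny helper that maps a task type to a search_context level following the guidance above (`pick_search_context` is a hypothetical convenience function, not part of the API):

```python
def pick_search_context(task):
    """Pick a search_context level for the web plugin based on the task type."""
    levels = {
        "fact_check": "low",   # quick fact-checks, yes/no answers
        "news": "medium",      # news updates, general research
        "analysis": "high",    # deep analysis, academic research
    }
    return levels.get(task, "medium")  # medium is the sweet-spot default

plugin = {"id": "web", "max_results": 5, "search_context": pick_search_context("news")}
```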

What It Costs

Simple pricing based on how much you search:
  • Base rate: $4 per 1,000 search results
  • Typical request: 5 results ≈ $0.02
  • Plus model costs: Standard token pricing for processing results

Real Cost Examples

# What you'll actually pay
pricing_examples = [
    {"results": 5, "cost": "$0.02"},   # Default - perfect for most stuff
    {"results": 10, "cost": "$0.04"},  # More thorough research
    {"results": 20, "cost": "$0.08"}   # Go all-out comprehensive
]

# Don't forget model costs for processing the search results
model_processing = {
    "gpt-4o": "$0.005 per 1k tokens",
    "claude-3-5-sonnet": "$0.003 per 1k tokens"
}
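Combining the two line items gives a rough per-request estimator. The $4-per-1,000-results rate and the per-token prices are the figures quoted above; treat the totals as back-of-the-envelope numbers:

```python
SEARCH_RATE_PER_RESULT = 4.0 / 1000  # $4 per 1,000 search results

def estimate_request_cost(num_results, tokens=0, price_per_1k_tokens=0.0):
    """Search cost plus model token cost for a single request, in dollars."""
    search_cost = num_results * SEARCH_RATE_PER_RESULT
    token_cost = (tokens / 1000) * price_per_1k_tokens
    return round(search_cost + token_cost, 4)

default_request = estimate_request_cost(5)  # search only, default result count
research_request = estimate_request_cost(10, tokens=2000,
                                         price_per_1k_tokens=0.005)
```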

What You Get Back

Search results get woven right into the AI’s response, with handy metadata:
{
  "choices": [
    {
      "message": {
        "content": "Based on recent web search results, here are the latest AI developments...\n\n[Search results were used from: example.com, news-site.com]",
        "search_metadata": {
          "query_used": "latest AI research developments 2024",
          "results_count": 5,
          "sources": [
            {
              "url": "https://example.com/ai-news",
              "title": "Latest AI Research Breakthroughs",
              "snippet": "Recent developments in..."
            }
          ]
        }
      }
    }
  ]
}
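Assuming the response shape shown above, pulling the cited sources out of the metadata is a one-liner:

```python
def extract_sources(response):
    """Return (url, title) pairs cited in search_metadata, or [] when no search ran."""
    message = response["choices"][0]["message"]
    metadata = message.get("search_metadata", {})
    return [(s["url"], s["title"]) for s in metadata.get("sources", [])]

# Sample response matching the structure documented above
sample = {
    "choices": [{
        "message": {
            "content": "Based on recent web search results...",
            "search_metadata": {
                "query_used": "latest AI research developments 2024",
                "results_count": 1,
                "sources": [{
                    "url": "https://example.com/ai-news",
                    "title": "Latest AI Research Breakthroughs",
                    "snippet": "Recent developments in..."
                }]
            }
        }
    }]
}
sources = extract_sources(sample)
```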

Real-World Use Cases

Breaking News Junkie

payload = {
    "model": "openai/gpt-4o:online",
    "messages": [
        {"role": "user", "content": "What are the top news stories today?"}
    ]
}
# Get today's headlines, not last year's training data

Market Research Ninja

payload = {
    "model": "anthropic/claude-3-5-sonnet-20241022",
    "messages": [
        {"role": "user", "content": "Analyze the current trends in renewable energy investments"}
    ],
    "plugins": [
        {
            "id": "web",
            "max_results": 15,
            "search_context": "high",
            "domains": ["bloomberg.com", "reuters.com", "energycentral.com"]
        }
    ]
}
# Skip the fluff, get data from sources that matter

Tech Documentation Detective

payload = {
    "model": "openai/gpt-4o",
    "messages": [
        {"role": "user", "content": "How do I implement OAuth2 in FastAPI with the latest version?"}
    ],
    "plugins": [
        {
            "id": "web",
            "search_prompt": "FastAPI OAuth2 implementation tutorial 2024",
            "max_results": 8
        }
    ]
}
# Get current tutorials, not outdated Stack Overflow posts

Academic Research Assistant

payload = {
    "model": "anthropic/claude-3-5-sonnet-20241022:online",
    "messages": [
        {"role": "user", "content": "Recent peer-reviewed studies on machine learning interpretability"}
    ]
}
# Find the latest research papers and findings

Pro Tips for Better Results

  1. Be Specific: “Tesla stock price today” beats “stock market”
  2. Match Context Level: Quick questions = low context, research = high context
  3. Set Smart Limits: More results = better info, but costs more
  4. Filter Domains: Stick to quality sources, skip the junk
  5. Cache When Possible: Don’t search for the same thing twice
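Tip 5 in practice: a tiny in-memory cache keyed on (model, prompt), so identical searches within a time window aren't paid for twice. This is a sketch; pass in whatever function actually sends your request:

```python
import time

def cached_search(fetch, ttl=300):
    """Wrap fetch(model, prompt) so repeated identical calls reuse the result for ttl seconds."""
    cache = {}

    def wrapper(model, prompt):
        key = (model, prompt)
        now = time.time()
        if key in cache and now - cache[key][0] < ttl:
            return cache[key][1]  # cache hit: no new search cost
        result = fetch(model, prompt)
        cache[key] = (now, result)
        return result

    return wrapper

# Demonstration with a stand-in fetch function
calls = []
def fake_fetch(model, prompt):
    calls.append(prompt)
    return {"answer": f"results for {prompt}"}

search = cached_search(fake_fetch)
search("openai/gpt-4o:online", "top news today")
search("openai/gpt-4o:online", "top news today")  # served from cache
```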

When Things Go Wrong

import requests

def bulletproof_search_request(messages, model="openai/gpt-4o:online", max_retries=3):
    url = "https://api.anyapi.ai/api/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json"
    }
    
    payload = {
        "model": model,
        "messages": messages
    }
    
    for attempt in range(max_retries):
        try:
            response = requests.post(url, headers=headers, json=payload, timeout=60)
            response.raise_for_status()
            
            result = response.json()
            
            # See what we found
            message = result["choices"][0]["message"]
            if "search_metadata" in message:
                results_count = message["search_metadata"]["results_count"]
                print(f"Found {results_count} web results to work with")
            
            return result
            
        except requests.exceptions.HTTPError as e:
            if e.response.status_code == 503 and ":online" in payload["model"]:
                print("Web search is taking a coffee break. Falling back to regular model...")
                # Graceful degradation: retry without web search
                payload["model"] = payload["model"].replace(":online", "")
            else:
                raise
        except requests.exceptions.ConnectionError:
            # Transient network hiccup: retry up to max_retries times
            if attempt == max_retries - 1:
                raise

The Fine Print

A few things to keep in mind:
  • Regional differences: Search availability varies by location
  • Rate limits: Don’t hammer the search too hard
  • Content filtering: Some queries might get blocked
  • Slight delays: Web search takes a moment longer
  • Cost scaling: More results = higher bills

Works With Everything

Web search plays nice with all the popular models:
  • OpenAI: GPT-4, GPT-3.5 Turbo, the whole gang
  • Anthropic: All the Claude 3 variants
  • Google: Gemini models
  • Open source: Llama, Mistral, and the rest
Pick whatever model fits your needs—they all handle web search context like champs.
Ready to give your AI models internet access? Web search turns any model into a research assistant that knows what happened five minutes ago, not just what it learned in training.