Text Models Overview

AnyAPI provides access to the most advanced language models from leading AI providers. Generate human-like text, engage in conversations, write code, and solve complex problems.

Available Models

OpenAI Models

  • GPT-5.4: Most capable flagship model with advanced reasoning and agentic capabilities
  • GPT-5.2: High-performance model for knowledge work and complex projects
  • GPT-4o: Reliable multimodal model for general-purpose tasks
  • GPT-4o mini: Fast and cost-effective for lightweight tasks

Anthropic Models

  • Claude Opus 4.6: Most powerful model for deep reasoning and complex multi-step tasks (1M context)
  • Claude Sonnet 4.6: Balanced performance for coding, design, and knowledge work
  • Claude Haiku 4.5: Fast and cost-effective for high-volume requests

Google Models

  • Gemini 3.1 Pro: Advanced reasoning with double the performance of the previous generation
  • Gemini 3.1 Flash Lite: Ultra-fast and cost-efficient for high-volume workloads
  • Gemini 2.5 Pro: Multimodal with 2M token context window

Meta Models

  • Llama 4 Maverick: Best-in-class open-source multimodal model (128 experts)
  • Llama 4 Scout: Efficient model with industry-leading 10M token context window
  • Llama 3.3: Proven open-source model for custom deployments

DeepSeek Models

  • DeepSeek V3: High-performance open-source model for coding and conversation
  • DeepSeek R1: Advanced reasoning capabilities on par with top proprietary models

Model Capabilities

Chat Completion

Natural conversations and Q&A

Text Generation

Creative writing and content creation

Code Generation

Programming assistance and debugging

Analysis

Text analysis and summarization

Chat Completions API

The primary endpoint for text generation:
POST /v1/chat/completions

Basic Example

curl -X POST "https://api.anyapi.ai/v1/chat/completions" \
  -H "Authorization: Bearer your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Explain quantum computing"}
    ]
  }'

Message Roles

  • system: Sets the behavior and context for the assistant
  • user: Messages from the human user
  • assistant: Previous responses from the AI model
{
  "messages": [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function to sort a list"},
    {"role": "assistant", "content": "Here's a Python function to sort a list..."},
    {"role": "user", "content": "Now make it work with custom comparisons"}
  ]
}

Advanced Features

Streaming Responses

Get real-time responses as they’re generated:
const response = await fetch('/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer your_api_key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [{role: 'user', content: 'Tell me a story'}],
    stream: true
  })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const {value, done} = await reader.read();
  if (done) break;
  
  const chunk = decoder.decode(value);
  console.log(chunk);
}
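The raw chunks logged above are server-sent events, not plain text: each event is a `data: {...}` line carrying a small delta, and the stream ends with `data: [DONE]`. A sketch of extracting the text deltas from one decoded chunk, assuming the OpenAI-compatible SSE format (`choices[0].delta.content`):

```javascript
// Extract text deltas from one decoded SSE chunk.
// Assumes the OpenAI-compatible stream format: each event is a
// "data: {...}" line, and the stream ends with "data: [DONE]".
function extractDeltas(chunk) {
  const deltas = [];
  for (const line of chunk.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;  // skip blank lines and comments
    const payload = trimmed.slice(5).trim();
    if (payload === '[DONE]') break;             // end-of-stream sentinel
    const event = JSON.parse(payload);
    const content = event.choices?.[0]?.delta?.content;
    if (content) deltas.push(content);
  }
  return deltas;
}
```

Inside the read loop, `extractDeltas(decoder.decode(value)).join('')` yields the new text for that chunk. Note that a network chunk can split an event across its boundary, so production code should buffer partial lines between reads.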

Function Calling

Enable models to call external functions:
{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "What's the weather in San Francisco?"}
  ],
  "functions": [
    {
      "name": "get_weather",
      "description": "Get the current weather in a location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "City and state, e.g. San Francisco, CA"
          }
        },
        "required": ["location"]
      }
    }
  ]
}
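When the model decides to use a function, its response message contains a `function_call` with the function name and JSON-encoded arguments; your code runs the function locally and appends the result to the conversation before calling the API again. A sketch of the dispatch step, assuming the OpenAI-compatible response shape (`message.function_call.name` / `.arguments`); the `get_weather` implementation here is a hypothetical stand-in:

```javascript
// Local implementations keyed by the function names declared in the request.
// get_weather is a hypothetical stand-in; replace it with a real lookup.
const functionTable = {
  get_weather: ({location}) => ({location, forecast: 'sunny', tempF: 68})
};

// Turn a model message containing a function_call into the follow-up
// message to append to the conversation before the next API call.
function dispatchFunctionCall(message) {
  const {name, arguments: rawArgs} = message.function_call;
  const fn = functionTable[name];
  if (!fn) throw new Error(`Unknown function: ${name}`);
  const result = fn(JSON.parse(rawArgs));  // arguments arrive as a JSON string
  return {role: 'function', name, content: JSON.stringify(result)};
}
```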

JSON Mode

Force the model to return valid JSON:
{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "Extract the key information from this text as JSON"}
  ],
  "response_format": {"type": "json_object"}
}
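Even in JSON mode, the model's output arrives as a string in `message.content`, so it still needs to be parsed, and it is worth parsing defensively. A minimal sketch, assuming the standard chat-completion response shape:

```javascript
// Parse a JSON-mode completion defensively: the JSON arrives as a
// string in message.content and must still be run through JSON.parse.
function parseJsonCompletion(response) {
  const text = response.choices[0].message.content;
  try {
    return JSON.parse(text);
  } catch (err) {
    throw new Error(`Model returned invalid JSON: ${text}`);
  }
}
```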

Model Comparison

| Model | Context Window | Strengths | Best For |
|---|---|---|---|
| GPT-5.4 | 128K | Reasoning, agentic capabilities | Complex tasks, enterprise workflows |
| Claude Opus 4.6 | 1M | Deep reasoning, long-horizon tasks | Multi-step analysis, coding |
| Gemini 3.1 Pro | 2M | Long context, reasoning | Document analysis, research |
| Llama 4 Maverick | 1M | Open source, multimodal | Custom deployments, self-hosting |
| DeepSeek V3 | 128K | Open source, coding | Cost-effective development |

Best Practices

Performance Optimization

  • Use system messages to set context once
  • Keep conversations focused and relevant
  • Choose the right model for your use case
  • Implement proper error handling
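The last point above can be sketched as a retry wrapper with exponential backoff. Treating 429 and 5xx as the retryable statuses is an assumption here; check the provider's error documentation for the authoritative list:

```javascript
// Retry a request on rate limits (429) and server errors (5xx) with
// exponential backoff. The retryable status codes are an assumption;
// consult the API's error documentation for the authoritative list.
async function fetchWithRetry(url, options, maxRetries = 3, baseDelayMs = 500) {
  for (let attempt = 0; ; attempt++) {
    const response = await fetch(url, options);
    const retryable = response.status === 429 || response.status >= 500;
    if (!retryable || attempt >= maxRetries) return response;
    const delay = baseDelayMs * 2 ** attempt;  // 500ms, 1s, 2s, ...
    await new Promise(resolve => setTimeout(resolve, delay));
  }
}
```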

Cost Optimization

  • Use smaller models for simple tasks
  • Implement response caching
  • Set reasonable max_tokens limits
  • Monitor usage with our dashboard
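Response caching from the list above can be sketched as an in-memory map keyed on the request body, so identical prompts reuse a previous completion instead of triggering a paid API call. `sendRequest` is a hypothetical stand-in for your actual API call:

```javascript
// Minimal in-memory cache keyed on the serialized request body.
// sendRequest is a hypothetical stand-in for the real API call.
const completionCache = new Map();

async function cachedCompletion(body, sendRequest) {
  const key = JSON.stringify(body);  // covers model, messages, and parameters
  if (completionCache.has(key)) return completionCache.get(key);
  const result = await sendRequest(body);
  completionCache.set(key, result);
  return result;
}
```

For production use, prefer a shared store with an expiry policy (e.g. Redis with a TTL) so cached answers do not go stale.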

Quality Improvement

  • Provide clear, specific instructions
  • Use examples in your prompts
  • Iterate on prompt design
  • Test with different models

Common Use Cases

Content Creation

  • Blog posts and articles
  • Marketing copy
  • Product descriptions
  • Social media content

Development

  • Code generation
  • Bug fixing
  • Code review
  • Documentation

Analysis

  • Text summarization
  • Sentiment analysis
  • Data extraction
  • Research assistance

Customer Support

  • Chatbots
  • FAQ responses
  • Ticket routing
  • Response generation

Getting Started

Quick Start

Set up your API key and make your first request

SDKs

Use our official libraries

Examples

See practical examples

Function Calling

Learn about tool integration