Supported SDKs

AnyAPI provides official SDKs and supports popular community libraries for seamless integration across different programming languages and frameworks.

Official SDKs

Python SDK

Our official Python SDK provides a convenient interface for all AnyAPI endpoints:
pip install anyapi
from anyapi import AnyAPI

client = AnyAPI(api_key="your-api-key")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
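
Streaming responses are also supported. The sketch below is a minimal example and assumes the SDK follows the OpenAI-style stream=True interface, where each chunk carries an incremental delta of the reply:

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
    stream=True  # assumption: OpenAI-style streaming flag
)

for chunk in stream:
    # Each chunk holds a partial piece of the assistant's reply
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")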

JavaScript/TypeScript SDK

Official SDK for Node.js and browser environments:
npm install anyapi
import AnyAPI from 'anyapi';

const anyapi = new AnyAPI({
  apiKey: 'your-api-key'
});

const response = await anyapi.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'user', content: 'Hello!' }
  ]
});

console.log(response.choices[0].message.content);

OpenAI-Compatible Libraries

AnyAPI is fully compatible with OpenAI SDKs and libraries. Simply change the base URL:

OpenAI Python

from openai import OpenAI

client = OpenAI(
    api_key="your-anyapi-key",
    base_url="https://api.anyapi.ai/v1"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
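
The async client from the openai package (v1 and later) works the same way; only the base URL changes:

import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI(
    api_key="your-anyapi-key",
    base_url="https://api.anyapi.ai/v1"
)

async def main():
    # Awaiting the call keeps the event loop free for other work
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user", "content": "Hello!"}
        ]
    )
    print(response.choices[0].message.content)

asyncio.run(main())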

OpenAI Node.js

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'your-anyapi-key',
  baseURL: 'https://api.anyapi.ai/v1'
});

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'user', content: 'Hello!' }
  ]
});

Community SDKs

Go

package main

import (
    "context"
    "fmt"
    "github.com/sashabaranov/go-openai"
)

func main() {
    config := openai.DefaultConfig("your-anyapi-key")
    config.BaseURL = "https://api.anyapi.ai/v1"
    
    client := openai.NewClientWithConfig(config)
    
    resp, err := client.CreateChatCompletion(
        context.Background(),
        openai.ChatCompletionRequest{
            Model: openai.GPT4o, // "gpt-4o", matching the other examples
            Messages: []openai.ChatCompletionMessage{
                {
                    Role:    openai.ChatMessageRoleUser,
                    Content: "Hello!",
                },
            },
        },
    )
    
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        return
    }
    
    fmt.Println(resp.Choices[0].Message.Content)
}

PHP

<?php
require_once 'vendor/autoload.php';

$client = OpenAI::factory()
    ->withApiKey('your-anyapi-key')
    ->withBaseUri('https://api.anyapi.ai/v1/')
    ->make();

$response = $client->chat()->create([
    'model' => 'gpt-4o',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
]);

echo $response->choices[0]->message->content;
?>

Ruby

require 'openai'

client = OpenAI::Client.new(
  access_token: 'your-anyapi-key',
  uri_base: 'https://api.anyapi.ai' # ruby-openai appends the API version ('v1') itself
)

response = client.chat(
  parameters: {
    model: 'gpt-4o',
    messages: [
      { role: 'user', content: 'Hello!' }
    ]
  }
)

puts response.dig('choices', 0, 'message', 'content')

C#

using OpenAI.GPT3;
using OpenAI.GPT3.Managers;
using OpenAI.GPT3.ObjectModels.RequestModels;

var openAiService = new OpenAIService(new OpenAiOptions()
{
    ApiKey = "your-anyapi-key",
    BaseDomain = "https://api.anyapi.ai"
});

var completionResult = await openAiService.ChatCompletion.CreateCompletion(
    new ChatCompletionCreateRequest()
    {
        Messages = new List<ChatMessage>
        {
            ChatMessage.FromUser("Hello!")
        },
        Model = "gpt-4o"
    });

if (completionResult.Successful)
{
    Console.WriteLine(completionResult.Choices.First().Message.Content);
}

Framework Integrations

LangChain

from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

# For completion models
llm = OpenAI(
    openai_api_key="your-anyapi-key",
    openai_api_base="https://api.anyapi.ai/v1"
)

# For chat models
chat = ChatOpenAI(
    openai_api_key="your-anyapi-key",
    openai_api_base="https://api.anyapi.ai/v1",
    model_name="gpt-4o"
)
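
A quick usage sketch: recent LangChain releases (0.1+) expose invoke on chat models, returning a message whose content attribute holds the reply (on newer versions these classes live in the langchain_openai package):

# invoke returns an AIMessage; .content is the model's reply
reply = chat.invoke("Hello!")
print(reply.content)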

LlamaIndex

from llama_index.llms import OpenAI

llm = OpenAI(
    api_key="your-anyapi-key",
    api_base="https://api.anyapi.ai/v1",
    model="gpt-4o"
)
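
A quick usage sketch (newer llama_index releases import OpenAI from llama_index.llms.openai instead):

# complete() returns a CompletionResponse; .text holds the generated output
response = llm.complete("Hello!")
print(response.text)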

SDK Features

All AnyAPI SDKs support:
  • Streaming responses for real-time output
  • Async/await for non-blocking operations
  • Type safety with TypeScript definitions
  • Error handling with detailed error messages
  • Retry logic for robust API calls
  • Request timeout configuration (a configuration sketch for both follows this list)
  • Proxy support for enterprise environments
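
As a configuration sketch for the retry and timeout options in the Python SDK, the parameter names below (timeout, max_retries) are assumptions modeled on OpenAI-style clients; check the SDK reference for the exact spelling:

from anyapi import AnyAPI

# Hypothetical parameter names, assuming OpenAI-style client options
client = AnyAPI(
    api_key="your-api-key",
    timeout=30.0,    # abort requests that take longer than 30 seconds
    max_retries=3    # retry transient failures (e.g. timeouts, 5xx) up to 3 times
)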

Getting Help

Contributing

We welcome contributions to our SDKs! Check out our: