Privacy & Data: Your AI, Your Rules

Because your prompts aren’t anyone else’s training data

Building with AI means your data flows through multiple systems: AnyAPI, model providers, and everything in between. Here’s exactly what happens to your information, who sees what, and how to lock it down tight.

🛡️ The AnyAPI Promise

We’re the pipeline, not the vault

The TL;DR: AnyAPI doesn’t store your prompts or responses. Period.

The exception: If you explicitly opt into prompt logging in your account settings, we’ll keep what you tell us to keep. Your choice, your control.

What we do track:
  • 📊 Metadata only (tokens used, response time, model called)
  • 🎯 Anonymous sampling for model ranking (completely disconnected from your account)
  • 📈 Usage stats to power your activity dashboard
What we never do:
  • 👀 Read your conversations for fun
  • 🏪 Sell your data to third parties
  • 🤖 Train our own models on your prompts
  • 📱 Associate anonymous samples with your identity

🏢 Provider Policies: The Real Talk

Each AI provider plays by different rules

Every model provider on AnyAPI has their own data policies. Some are strict, others… less so. Here’s how to navigate the landscape:

🎓 Training Data Policies

The reality check: Some providers use your prompts to improve their models. Others promise they won’t.

Your protection: AnyAPI gives you granular control:
{
  "model": "anthropic/claude-3.5-sonnet",
  "data_policy": {
    "training_allowed": false,  // Block providers that train on data
    "retention_days": 30        // Limit how long they can keep it
  },
  "messages": [...]
}
Account-wide shields: Set your preferences once in privacy settings:
  • Paid models only (generally have stricter policies)
  • 🚫 Block training-enabled providers entirely
  • Set maximum retention periods you’re comfortable with

🕐 Data Retention Reality

The spectrum:
  • Zero retention: Some providers delete everything immediately (rare but awesome)
  • Short term: 30-90 days for abuse monitoring (common)
  • Compliance-based: Longer periods for legal/regulatory reasons
  • Indefinite: Some free models keep data forever (buyer beware)
Your move: Check each provider’s retention policy before sending sensitive data. AnyAPI shows this info upfront—no hunting through legal docs required.
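If you’d rather enforce that check in code than by eye, here’s a minimal sketch. The base URL, the /v1/providers path, and the retention_days / training_allowed fields are illustrative assumptions, not a documented AnyAPI contract:
// Hypothetical example: filter providers by their advertised retention window
// before routing sensitive traffic. Endpoint path and field names are assumed.
const res = await fetch("https://api.anyapi.example/v1/providers", {
  headers: { Authorization: `Bearer ${process.env.ANYAPI_KEY}` }
});
const allProviders = await res.json();

// Keep only providers that delete data within 30 days and never train on it.
const acceptable = allProviders.filter(
  (p) => p.retention_days <= 30 && p.training_allowed === false
);
console.log(acceptable.map((p) => p.name));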

⚙️ Privacy Controls That Actually Work

Request-Level Protection

Lock down individual requests when handling sensitive data:
{
  "model": "anyapi/auto",
  "providers": {
    "require": ["zero-retention", "no-training"],
    "exclude": ["high-risk-provider"]
  },
  "messages": [
    {
      "role": "user", 
      "content": "Analyze this confidential financial report..."
    }
  ]
}
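To see that request shape in context, here’s a minimal sketch of sending it from Node.js 18+ with fetch. The base URL and the OpenAI-style /v1/chat/completions path are assumptions; substitute whatever endpoint your AnyAPI dashboard documents:
// Minimal sketch: restrict a single request to zero-retention, no-training
// providers. The endpoint URL is a placeholder, not a documented value.
const response = await fetch("https://api.anyapi.example/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.ANYAPI_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    model: "anyapi/auto",
    providers: {
      require: ["zero-retention", "no-training"],
      exclude: ["high-risk-provider"]
    },
    messages: [
      { role: "user", content: "Analyze this confidential financial report..." }
    ]
  })
});
const data = await response.json();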

Account-Level Defaults

Set it once, protect everything:
  • 🔒 Training opt-out across all providers
  • ⏱️ Maximum retention periods you’ll accept
  • 💰 Paid-only routing for stricter data policies
  • 🚫 Provider blocklists for your specific requirements

Team Management

Enterprise controls for organizations:
  • 👥 Team-wide policies that override individual settings
  • 📋 Audit logs showing exactly where data went
  • 🎯 Role-based restrictions on sensitive providers
  • 📊 Compliance reporting for your security team

🎯 Smart Privacy Strategies

The Layered Defense

  • Layer 1: AnyAPI doesn’t log by default
  • Layer 2: Provider filtering based on your policies
  • Layer 3: Request-level overrides for ultra-sensitive data
  • Layer 4: Data minimization in your prompts
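Layers 1 through 3 are configuration; Layer 4 lives in your own code. Here’s a minimal sketch of prompt-side data minimization. The patterns below are illustrative only, and real redaction deserves a dedicated PII library:
// Strip obvious identifiers before a prompt ever leaves your infrastructure.
// These regexes are examples, not an exhaustive redaction strategy.
function minimize(text) {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]")        // email addresses
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]")            // US SSN format
    .replace(/\b(?:\d[ -]?){13,16}\b/g, "[CARD_NUMBER]");  // card-like digit runs
}

const prompt = minimize(
  "Summarize this ticket from jane.doe@corp.com, card 4111 1111 1111 1111 on file."
);
// prompt is now: "Summarize this ticket from [EMAIL], card [CARD_NUMBER] on file."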

Development vs. Production

// Development: Looser for faster iteration
const devConfig = {
  model: "@preset/development",
  data_policy: {
    training_allowed: true  // Cheaper models, faster responses
  }
};

// Production: Locked down tight
const prodConfig = {
  model: "@preset/production", 
  data_policy: {
    training_allowed: false,
    retention_days: 0,
    compliance: "SOC2"
  }
};
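One way to wire those presets together is to pick the config by environment, so production stays locked down even when nobody sets a policy per request. How these keys merge into the request body is an assumption based on the earlier examples:
// Choose the config once, based on the runtime environment.
const config = process.env.NODE_ENV === "production" ? prodConfig : devConfig;

// Merge it into each request alongside the messages.
const requestBody = {
  ...config,
  messages: [{ role: "user", content: "Draft the quarterly summary." }]
};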

Data Classification

  • Public data: Use any provider, optimize for cost and speed
  • Internal data: Stick to no-training providers with short retention
  • Confidential data: Zero-retention providers only, audit everything
  • Regulated data: Compliance-certified providers with full paper trails
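In code, that classification can become a simple routing table. Here’s a sketch that reuses the policy tags from the earlier request-level example; the classification names and the "compliance-certified" tag are assumptions for illustration:
// Map internal data classifications to provider requirements.
// "zero-retention" and "no-training" mirror the earlier example;
// "compliance-certified" is an assumed tag, not a confirmed one.
const POLICY_BY_CLASSIFICATION = {
  public: null,                                            // any provider is fine
  internal: { require: ["no-training"] },
  confidential: { require: ["zero-retention", "no-training"] },
  regulated: { require: ["zero-retention", "no-training", "compliance-certified"] }
};

function buildRequest(classification, messages) {
  const providers = POLICY_BY_CLASSIFICATION[classification];
  return {
    model: "anyapi/auto",
    ...(providers ? { providers } : {}),   // omit the filter for public data
    messages
  };
}

const request = buildRequest("confidential", [
  { role: "user", content: "Review this draft contract..." }
]);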

📚 The Fine Print (Made Simple)

  • Full legal details: Check our privacy policy and terms of service
  • Provider specifics: Each model’s data policy is shown in real time, no guessing required
  • Questions or concerns: Contact our privacy team directly through the dashboard
Remember: Privacy isn’t a checkbox; it’s an ongoing practice. Review your settings regularly, especially as new providers join the platform.
Your data, your rules, your peace of mind. Configure AnyAPI privacy settings that actually protect what matters.