OneLLM Documentation


A “drop-in” replacement for OpenAI’s client that offers a unified interface for interacting with large language models from various providers, with support for hundreds of models, built-in fallback mechanisms, and enhanced reliability features.


This project was created by Ran Aroussi, and is released under the Apache 2.0 license.

Support this project by starring it on GitHub.

More stars → more visibility → more contributors → better features → more robust tool for everyone 🎉


Welcome to the OneLLM documentation! OneLLM is a unified interface for 300+ LLMs across 18+ providers, designed as a drop-in replacement for the OpenAI Python client.

🚀 Get Started

🌟 Key Features

  • Drop-in OpenAI Replacement: Use the same code with 300+ models
  • Unified Interface: One API for all providers (OpenAI, Anthropic, Google, etc.)
  • Smart Fallbacks: Automatic failover between providers
  • Type Safety: Full type hints and IDE support
  • Async Support: Native async/await capabilities
  • Provider Agnostic: Switch models with just a string change
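To illustrate the "string change" idea: OneLLM's examples identify models with `provider/model` strings (e.g. `"openai/gpt-4"`), so switching providers means editing only that prefix. The helper below is purely illustrative and not part of OneLLM's API; the `anthropic/claude-3` string is a hypothetical example of the same convention.

```python
# Illustrative sketch only -- NOT part of OneLLM's API.
# Shows the "provider/model" naming convention used in OneLLM's examples.
def split_model(model: str) -> tuple[str, str]:
    """Split a "provider/model" string into (provider, model name)."""
    provider, _, name = model.partition("/")
    return provider, name

# Switching providers is just a different string:
print(split_model("openai/gpt-4"))        # ('openai', 'gpt-4')
print(split_model("anthropic/claude-3"))  # ('anthropic', 'claude-3')
```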

📖 Documentation

Core Concepts

API Reference

Providers

💡 Example

```python
from onellm import ChatCompletion

# Basic usage (identical to OpenAI's client)
response = ChatCompletion.create(
    model="openai/gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message["content"])
```

🤝 Support


Copyright © 2025 Ran Aroussi.
Licensed under Apache 2.0 license.