
README.en.md

![new-api](/web/public/logo.png)

# New API 🍥

**Next-Generation Large Model Gateway and AI Asset Management System**

中文 | English | Français | 日本語


Quick Start • Key Features • Deployment • Documentation • Help

📝 Project Description

> [!NOTE]
> This is an open-source project developed based on One API.


🤝 Trusted Partners

In no particular order

Cherry Studio Peking University UCloud Alibaba Cloud IO.NET


🙏 Special Thanks

JetBrains Logo

Thanks to JetBrains for providing a free open-source development license for this project


🚀 Quick Start

Using Docker Compose (Recommended)

```shell
# Clone the project
git clone https://github.com/QuantumNous/new-api.git
cd new-api

# Edit the docker-compose.yml configuration
nano docker-compose.yml

# Start the service
docker-compose up -d
```

Using Docker Commands
```shell
# Pull the latest image
docker pull calciumion/new-api:latest

# Using SQLite (default)
docker run --name new-api -d --restart always \
  -p 3000:3000 \
  -e TZ=Asia/Shanghai \
  -v ./data:/data \
  calciumion/new-api:latest

# Using MySQL
docker run --name new-api -d --restart always \
  -p 3000:3000 \
  -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" \
  -e TZ=Asia/Shanghai \
  -v ./data:/data \
  calciumion/new-api:latest
```

💡 Tip: `-v ./data:/data` saves data in the `data` folder of the current directory; you can also use an absolute path, e.g. `-v /your/custom/path:/data`


🎉 After deployment is complete, visit http://localhost:3000 to start using!

📖 For more deployment methods, please refer to Deployment Guide


📚 Documentation

### 📖 [Official Documentation](https://docs.newapi.pro/en/docs) | [![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/QuantumNous/new-api)

Quick Navigation:

| Category | Link |
|----------|------|
| 🚀 Deployment Guide | Installation Documentation |
| ⚙️ Environment Configuration | Environment Variables |
| 📡 API Documentation | API Documentation |
| ❓ FAQ | FAQ |
| 💬 Community Interaction | Communication Channels |

✨ Key Features

For detailed features, please refer to Features Introduction

🎨 Core Functions

| Feature | Description |
|---------|-------------|
| 🎨 New UI | Modern user interface design |
| 🌍 Multi-language | Supports Chinese, English, French, and Japanese |
| 🔄 Data Compatibility | Fully compatible with the original One API database |
| 📈 Data Dashboard | Visual console and statistical analysis |
| 🔒 Permission Management | Token grouping, model restrictions, user management |

💰 Payment and Billing

  • ✅ Online recharge (EPay, Stripe)
  • ✅ Pay-per-use model pricing
  • ✅ Cache billing support (OpenAI, Azure, DeepSeek, Claude, Qwen and all supported models)
  • ✅ Flexible billing policy configuration

🔐 Authorization and Security

  • 😈 Discord authorization login
  • 🤖 LinuxDO authorization login
  • 📱 Telegram authorization login
  • 🔑 OIDC unified authentication

🚀 Advanced Features

API Format Support:

Intelligent Routing:

  • ⚖️ Channel weighted random
  • 🔄 Automatic retry on failure
  • 🚦 User-level model rate limiting

Format Conversion:

  • 🔄 OpenAI ⇄ Claude Messages
  • 🔄 OpenAI ⇄ Gemini Chat
  • 🔄 Thinking-to-content functionality

Reasoning Effort Support:

View detailed configuration

OpenAI series models:

  • o3-mini-high - High reasoning effort
  • o3-mini-medium - Medium reasoning effort
  • o3-mini-low - Low reasoning effort
  • gpt-5-high - High reasoning effort
  • gpt-5-medium - Medium reasoning effort
  • gpt-5-low - Low reasoning effort

Claude thinking models:

  • claude-3-7-sonnet-20250219-thinking - Enable thinking mode

Google Gemini series models:

  • gemini-2.5-flash-thinking - Enable thinking mode
  • gemini-2.5-flash-nothinking - Disable thinking mode
  • gemini-2.5-pro-thinking - Enable thinking mode
  • gemini-2.5-pro-thinking-128 - Enable thinking mode with thinking budget of 128 tokens
  • You can also append -low, -medium, or -high to any Gemini model name to request the corresponding reasoning effort (no extra thinking-budget suffix needed).
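As an illustration, the suffix convention above can be exercised with an ordinary OpenAI-style request. This is only a sketch: the host, API key, and `/v1/chat/completions` path are placeholder assumptions following the usual OpenAI-compatible conventions, not values taken from this document.

```shell
# Hypothetical example: append -high to a Gemini model name to request
# high reasoning effort. Host and key below are placeholders.
NEW_API_HOST="http://localhost:3000"
NEW_API_KEY="sk-your-token-here"
MODEL="gemini-2.5-pro-high"   # -low / -medium / -high select reasoning effort

# "|| true" keeps the snippet copy-paste safe when no server is running yet
curl -s "$NEW_API_HOST/v1/chat/completions" \
  -H "Authorization: Bearer $NEW_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"$MODEL\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]}" || true
```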


🤖 Model Support

For details, please refer to API Documentation - Relay Interface

| Model Type | Description | Documentation |
|------------|-------------|---------------|
| 🤖 OpenAI GPTs | gpt-4-gizmo-* series | - |
| 🎨 Midjourney-Proxy | Midjourney-Proxy(Plus) | Documentation |
| 🎵 Suno-API | Suno API | Documentation |
| 🔄 Rerank | Cohere, Jina | Documentation |
| 💬 Claude | Messages format | Documentation |
| 🌐 Gemini | Google Gemini format | Documentation |
| 🔧 Dify | ChatFlow mode | - |
| 🎯 Custom | Supports complete call address | - |

📡 Supported Interfaces

View complete interface list


🚢 Deployment

> [!TIP]
> Latest Docker image: `calciumion/new-api:latest`

📋 Deployment Requirements

| Component | Requirement |
|-----------|-------------|
| Local database | SQLite (Docker deployments must mount the /data directory) |
| Remote database | MySQL ≥ 5.7.8 or PostgreSQL ≥ 9.6 |
| Container engine | Docker / Docker Compose |

⚙️ Environment Variable Configuration

Common environment variable configuration
| Variable Name | Description | Default Value |
|---------------|-------------|---------------|
| SESSION_SECRET | Session secret (required for multi-machine deployment) | - |
| CRYPTO_SECRET | Encryption secret (required when sharing Redis) | - |
| SQL_DSN | Database connection string | - |
| REDIS_CONN_STRING | Redis connection string | - |
| STREAMING_TIMEOUT | Streaming response timeout (seconds) | 300 |
| STREAM_SCANNER_MAX_BUFFER_MB | Max per-line buffer (MB) for the stream scanner; increase when upstream sends huge image/base64 payloads | 64 |
| MAX_REQUEST_BODY_MB | Max request body size (MB, counted after decompression; prevents huge requests/zip bombs from exhausting memory). Requests exceeding it return 413 | 32 |
| AZURE_DEFAULT_API_VERSION | Azure API version | 2025-04-01-preview |
| ERROR_LOG_ENABLED | Error log switch | false |
| PYROSCOPE_URL | Pyroscope server address | - |
| PYROSCOPE_APP_NAME | Pyroscope application name | new-api |
| PYROSCOPE_BASIC_AUTH_USER | Pyroscope basic auth user | - |
| PYROSCOPE_BASIC_AUTH_PASSWORD | Pyroscope basic auth password | - |
| PYROSCOPE_MUTEX_RATE | Pyroscope mutex sampling rate | 5 |
| PYROSCOPE_BLOCK_RATE | Pyroscope block sampling rate | 5 |
| HOSTNAME | Hostname tag for Pyroscope | new-api |
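For orientation, a `docker run` invocation combining several of these variables might look like the sketch below. All values are illustrative placeholders, not working credentials, and `db-host`/`redis-host` are hypothetical names for your own services.

```shell
# Illustrative only: pass common variables via -e flags.
docker run --name new-api -d --restart always \
  -p 3000:3000 \
  -e SESSION_SECRET="replace-with-a-long-random-string" \
  -e CRYPTO_SECRET="replace-with-another-random-string" \
  -e SQL_DSN="root:123456@tcp(db-host:3306)/oneapi" \
  -e REDIS_CONN_STRING="redis://redis-host:6379" \
  -e STREAMING_TIMEOUT=300 \
  -v ./data:/data \
  calciumion/new-api:latest
```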

📖 Complete configuration: Environment Variables Documentation

🔧 Deployment Methods

Method 1: Docker Compose (Recommended)
```shell
# Clone the project
git clone https://github.com/QuantumNous/new-api.git
cd new-api

# Edit configuration
nano docker-compose.yml

# Start service
docker-compose up -d
```

Method 2: Docker Commands

Using SQLite:

```shell
docker run --name new-api -d --restart always \
  -p 3000:3000 \
  -e TZ=Asia/Shanghai \
  -v ./data:/data \
  calciumion/new-api:latest
```

Using MySQL:

```shell
docker run --name new-api -d --restart always \
  -p 3000:3000 \
  -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" \
  -e TZ=Asia/Shanghai \
  -v ./data:/data \
  calciumion/new-api:latest
```

💡 Path explanation:

  • `./data:/data` - a relative path; data is saved in the `data` folder of the current directory
  • You can also use an absolute path, e.g. `/your/custom/path:/data`
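One pitfall worth flagging with the MySQL example above: inside the container, `localhost` refers to the container itself, so a MySQL server running on the Docker host is not reachable via `localhost`. A common workaround (an assumption of this sketch, not something this document prescribes) is Docker's special `host.docker.internal` name, mapped explicitly on Linux:

```shell
# "localhost" inside the container is the container itself. To reach a MySQL
# server on the Docker host, use host.docker.internal instead; on Linux the
# --add-host flag below maps it to the host gateway (Docker 20.10+).
docker run --name new-api -d --restart always \
  -p 3000:3000 \
  --add-host=host.docker.internal:host-gateway \
  -e SQL_DSN="root:123456@tcp(host.docker.internal:3306)/oneapi" \
  -e TZ=Asia/Shanghai \
  -v ./data:/data \
  calciumion/new-api:latest
```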

Method 3: BaoTa Panel
  1. Install BaoTa Panel (version ≥ 9.2.0)
  2. Search for New-API in the application store
  3. One-click installation

📖 Tutorial with images

⚠️ Multi-machine Deployment Considerations

> [!WARNING]
>
> - SESSION_SECRET must be set, otherwise login state will be inconsistent across machines
> - When sharing Redis, CRYPTO_SECRET must be set, otherwise cached data cannot be decrypted
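One convenient way to produce values for these two secrets is `openssl` (an assumption of this sketch: any sufficiently long random string works; the document does not mandate a specific format). The key point is that every machine in the cluster must receive the same values.

```shell
# Generate 32 random bytes, hex-encoded (64 characters), for each secret.
SESSION_SECRET=$(openssl rand -hex 32)
CRYPTO_SECRET=$(openssl rand -hex 32)

# Distribute the SAME values to every machine, e.g. via -e flags or an env file.
echo "SESSION_SECRET=$SESSION_SECRET"
echo "CRYPTO_SECRET=$CRYPTO_SECRET"
```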

🔄 Channel Retry and Cache

Retry configuration: Settings → Operation Settings → General Settings → Failure Retry Count

Cache configuration:

  • REDIS_CONN_STRING: Redis cache (recommended)
  • MEMORY_CACHE_ENABLED: Memory cache

🔗 Related Projects

Upstream Projects

| Project | Description |
|---------|-------------|
| One API | Original project base |
| Midjourney-Proxy | Midjourney interface support |

Supporting Tools

| Project | Description |
|---------|-------------|
| neko-api-key-tool | Key quota query tool |
| new-api-horizon | High-performance optimized edition of New API |

💬 Help Support

📖 Documentation Resources

| Resource | Link |
|----------|------|
| 📘 FAQ | FAQ |
| 💬 Community Interaction | Communication Channels |
| 🐛 Issue Feedback | Issue Feedback |
| 📚 Complete Documentation | Official Documentation |

🤝 Contribution Guide

Welcome all forms of contribution!

  • 🐛 Report Bugs
  • 💡 Propose New Features
  • 📝 Improve Documentation
  • 🔧 Submit Code

🌟 Star History

[![Star History Chart](https://api.star-history.com/svg?repos=Calcium-Ion/new-api&type=Date)](https://star-history.com/#Calcium-Ion/new-api&Date)

### 💖 Thank you for using New API

If this project is helpful to you, welcome to give us a ⭐️ Star!

**[Official Documentation](https://docs.newapi.pro/en/docs)** • **[Issue Feedback](https://github.com/Calcium-Ion/new-api/issues)** • **[Latest Release](https://github.com/Calcium-Ion/new-api/releases)**

Built with ❤️ by QuantumNous