README.en.md

# New API

![new-api](/web/public/logo.png)

🍥 Next Generation LLM Gateway and AI Asset Management System


## 📝 Project Description

> [!NOTE]
> This is an open-source project developed based on One API.

> [!IMPORTANT]
> - Users must comply with OpenAI's Terms of Use and all applicable laws and regulations, and must not use this project for illegal purposes.
> - This project is for personal learning only. Stability is not guaranteed, and no technical support is provided.

## ✨ Key Features

  1. 🎨 New UI interface (some interfaces pending update)
  2. 🌍 Multi-language support (work in progress)
  3. 🎨 Added Midjourney-Proxy(Plus) interface support, Integration Guide
  4. 💰 Online recharge support, configurable in system settings:
    • EasyPay
  5. 🔍 Query usage quota by key
  6. 📑 Configurable items per page in pagination
  7. 🔄 Compatible with original One API database (one-api.db)
  8. 💵 Support per-request model pricing, configurable in System Settings - Operation Settings
  9. ⚖️ Support channel weighted random selection
  10. 📈 Data dashboard (console)
  11. 🔒 Configurable model access per token
  12. 🤖 Telegram authorization login support:
    1. System Settings - Configure Login Registration - Allow Telegram Login
    2. Send /setdomain command to @Botfather
    3. Select your bot, then enter http(s)://your-website/login
    4. Telegram Bot name is the bot username without @
  13. 🎵 Added Suno API interface support, Integration Guide
  14. 🔄 Support for Rerank models, compatible with Cohere and Jina, can integrate with Dify, Integration Guide
  15. 🔊 Support for OpenAI's Realtime API, including Azure channels

## Model Support

This version additionally supports:

  1. Third-party model gps (gpt-4-gizmo-*)
  2. Midjourney-Proxy(Plus) interface, Integration Guide
  3. Custom channels with full API URL support
  4. Suno API interface, Integration Guide
  5. Rerank models, supporting Cohere and Jina, Integration Guide
  6. Dify

You can add the custom model `gpt-4-gizmo-*` in channels. These are third-party models and cannot be called with official OpenAI keys.

## Additional Configurations Beyond One API

- `GENERATE_DEFAULT_TOKEN`: Generate an initial token for new users, default `false`
- `STREAMING_TIMEOUT`: Set the streaming response timeout, default 60 seconds
- `DIFY_DEBUG`: Output workflow and node info to the client for Dify channels, default `true`
- `FORCE_STREAM_OPTION`: Override the client's `stream_options` parameter, default `true`
- `GET_MEDIA_TOKEN`: Calculate image tokens, default `true`
- `GET_MEDIA_TOKEN_NOT_STREAM`: Calculate image tokens in non-stream mode, default `true`
- `UPDATE_TASK`: Update async tasks (Midjourney, Suno), default `true`
- `GEMINI_MODEL_MAP`: Specify Gemini model versions (v1/v1beta), format: `"model:version"`, comma-separated
- `COHERE_SAFETY_SETTING`: Cohere model safety settings, options: `NONE`, `CONTEXTUAL`, `STRICT`, default `NONE`
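
The `GEMINI_MODEL_MAP` format can be illustrated with a short shell sketch; the model names and versions below are placeholders for illustration, not an official list:

```shell
# Illustrative GEMINI_MODEL_MAP value: comma-separated "model:version" pairs
GEMINI_MODEL_MAP="gemini-1.5-pro:v1beta,gemini-pro:v1"

# Split on commas and print each model/version pair
IFS=','
for pair in $GEMINI_MODEL_MAP; do
  model=${pair%%:*}
  version=${pair##*:}
  echo "model=$model version=$version"
done
# prints:
# model=gemini-1.5-pro version=v1beta
# model=gemini-pro version=v1
```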

## Deployment

> [!TIP]
> Latest Docker image: `calciumion/new-api:latest`
> Default account: `root`, password: `123456`

Update command:

```shell
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower -cR
```

### Requirements
- Local database (default): SQLite (Docker deployment must mount `/data` directory)
- Remote database: MySQL >= 5.7.8, PgSQL >= 9.6
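
As a sketch of what the remote-database configuration might look like, the `SQL_DSN` environment variable (used in the Docker examples below) takes a Go-style DSN. The hosts and credentials here are placeholders, and the exact PgSQL form should be verified against the project documentation:

```shell
# MySQL (placeholder credentials and host)
SQL_DSN="root:123456@tcp(db.example.com:3306)/new-api"
# PgSQL (placeholder; verify the expected DSN format for your setup)
SQL_DSN="postgres://postgres:123456@db.example.com:5432/new-api"
```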

### Docker Deployment

#### Using Docker Compose (Recommended)
```shell
# Clone the project
git clone https://github.com/Calcium-Ion/new-api.git
cd new-api
# Edit docker-compose.yml as needed
# Start
docker-compose up -d
```

#### Using the Docker Image Directly

```shell
# SQLite deployment:
docker run --name new-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
# MySQL deployment: add -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" and adjust the database connection parameters as needed. Example:
docker run --name new-api -d --restart always -p 3000:3000 -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
```

## Channel Retry

Channel retry is implemented and can be configured in Settings -> Operation Settings -> General Settings. Enabling caching is recommended.
The first retry uses the same priority, the second retry uses the next priority, and so on.

## Cache Configuration

  1. `REDIS_CONN_STRING`: Use Redis as the cache
     - Example: `REDIS_CONN_STRING=redis://default:redispw@localhost:49153`
  2. `MEMORY_CACHE_ENABLED`: Enable the in-memory cache, default `false`
     - Example: `MEMORY_CACHE_ENABLED=true`
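
The two cache options can be combined in a Docker deployment; for instance (the Redis connection string and paths below are placeholders):

```shell
docker run --name new-api -d --restart always -p 3000:3000 \
  -e REDIS_CONN_STRING="redis://default:redispw@redis.example.com:6379" \
  -e MEMORY_CACHE_ENABLED=true \
  -e TZ=Asia/Shanghai \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest
```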

## Why Some Errors Don't Retry

Requests that fail with error codes 400, 504, or 524 will not be retried.

### Enabling Retry for 400 Errors

In Channel -> Edit, set Status Code Override to:

```json
{
  "400": "500"
}
```

## Integration Guides

## Related Projects

## 🌟 Star History

Star History Chart