![new-api](/web/public/logo.png)

# New API

🍥 Next Generation LLM Gateway and AI Asset Management System


## 📝 Project Description

> [!NOTE]
> This is an open-source project developed on top of One API.

> [!IMPORTANT]
> - Users must comply with OpenAI's Terms of Use and all applicable laws and regulations. Do not use this project for illegal purposes.
> - This project is for personal learning only. Stability is not guaranteed, and no technical support is provided.

## ✨ Key Features

1. 🎨 New UI (some pages still pending update)
2. 🌍 Multi-language support (work in progress)
3. 🎨 Midjourney-Proxy(Plus) interface support, see Integration Guide
4. 💰 Online recharge support, configurable in System Settings:
    - EasyPay
5. 🔍 Query usage quota by key
6. 📑 Configurable number of items per page in pagination
7. 🔄 Compatible with the original One API database (one-api.db)
8. 💵 Per-request model pricing, configurable in System Settings - Operation Settings
9. ⚖️ Weighted random channel selection
10. 📈 Data dashboard (console)
11. 🔒 Configurable model access per token
12. 🤖 Telegram authorization login:
    1. System Settings - Configure Login Registration - Allow Telegram Login
    2. Send the /setdomain command to @Botfather
    3. Select your bot, then enter http(s)://your-website/login
    4. The Telegram Bot name is the bot username without the @
13. 🎵 Suno API interface support, see Integration Guide
14. 🔄 Rerank model support, compatible with Cohere and Jina; can be integrated with Dify, see Integration Guide
15. OpenAI Realtime API support, including Azure channels

## Model Support

This version additionally supports:

1. Third-party model gpts (`gpt-4-gizmo-*`)
2. Midjourney-Proxy(Plus) interface, see Integration Guide
3. Custom channels with full API URL support
4. Suno API interface, see Integration Guide
5. Rerank models, supporting Cohere and Jina, see Integration Guide
6. Dify

You can add the custom model `gpt-4-gizmo-*` in channels. These are third-party models and cannot be called with official OpenAI keys.

## Additional Configurations Beyond One API

- `GENERATE_DEFAULT_TOKEN`: generate an initial token for new users, default `false`
- `STREAMING_TIMEOUT`: streaming response timeout, default `60` seconds
- `DIFY_DEBUG`: output workflow and node info to the client for Dify channels, default `true`
- `FORCE_STREAM_OPTION`: override the client's `stream_options` parameter, default `true`
- `GET_MEDIA_TOKEN`: count image tokens, default `true`
- `GET_MEDIA_TOKEN_NOT_STREAM`: count image tokens in non-stream mode, default `true`
- `UPDATE_TASK`: update async tasks (Midjourney, Suno), default `true`
- `GEMINI_MODEL_MAP`: specify Gemini model versions (v1/v1beta), format `"model:version"`, comma-separated
- `COHERE_SAFETY_SETTING`: Cohere model safety setting, one of `NONE`, `CONTEXTUAL`, `STRICT`, default `NONE`
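These options are plain environment variables, so they can be supplied the same way as the settings in the Docker examples below. A minimal sketch; the values shown are the documented defaults, and the model name in `GEMINI_MODEL_MAP` is purely illustrative:

```shell
# Sketch: overriding a few of the settings above at container start.
docker run --name new-api -d --restart always -p 3000:3000 \
  -e STREAMING_TIMEOUT=60 \
  -e GENERATE_DEFAULT_TOKEN=false \
  -e COHERE_SAFETY_SETTING=NONE \
  -e GEMINI_MODEL_MAP="gemini-pro:v1beta" \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest
```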

## Deployment

> [!TIP]
> Latest Docker image: `calciumion/new-api:latest`
> Default account: `root`, password: `123456`
>
> Update command:
>
> ```shell
> docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower -cR
> ```

### Requirements
- Local database (default): SQLite (Docker deployment must mount `/data` directory)
- Remote database: MySQL >= 5.7.8, PgSQL >= 9.6

### Docker Deployment

#### Using Docker Compose (Recommended)
```shell
# Clone the project
git clone https://github.com/Calcium-Ion/new-api.git
cd new-api
# Edit docker-compose.yml as needed
# Start
docker-compose up -d
```

#### Direct Docker Image Usage
```shell
# SQLite deployment:
docker run --name new-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest

# MySQL deployment: add -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi"
# and adjust the database connection parameters as needed. Example:
docker run --name new-api -d --restart always -p 3000:3000 -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
```

## Channel Retry

Channel retry is implemented and can be configured in Settings -> Operation Settings -> General Settings. Enabling the cache is recommended.
The first retry uses the same priority; the second retry falls back to the next priority level, and so on.
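The fallback order described above can be sketched as follows. This is only an illustration of the scheme, not the actual New API implementation; the function and names are made up:

```go
package main

import (
	"fmt"
	"sort"
)

// priorityForAttempt returns which priority level a given attempt should use:
// attempt 0 (the first call) and attempt 1 (the first retry) use the highest
// priority; each further retry steps down one level, clamping at the lowest.
func priorityForAttempt(priorities []int, attempt int) int {
	// Collect the distinct priority levels, highest first.
	uniq := map[int]bool{}
	for _, p := range priorities {
		uniq[p] = true
	}
	levels := make([]int, 0, len(uniq))
	for p := range uniq {
		levels = append(levels, p)
	}
	sort.Sort(sort.Reverse(sort.IntSlice(levels)))

	idx := attempt - 1
	if idx < 0 {
		idx = 0 // first call behaves like the first retry: top priority
	}
	if idx >= len(levels) {
		idx = len(levels) - 1 // clamp: keep using the lowest level
	}
	return levels[idx]
}

func main() {
	// Four channels across three priority levels: 10, 5, 1.
	prios := []int{10, 10, 5, 1}
	for attempt := 0; attempt <= 3; attempt++ {
		fmt.Printf("attempt %d -> priority %d\n", attempt, priorityForAttempt(prios, attempt))
	}
}
```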

## Cache Configuration

1. `REDIS_CONN_STRING`: use Redis as the cache
    - Example: `REDIS_CONN_STRING=redis://default:redispw@localhost:49153`
2. `MEMORY_CACHE_ENABLED`: enable the in-memory cache, default `false`
    - Example: `MEMORY_CACHE_ENABLED=true`

### Why Some Errors Don't Retry

Requests that fail with error codes 400, 504, or 524 will not be retried.

### To Enable Retry for 400

In Channel -> Edit, set Status Code Override to:

```json
{
  "400": "500"
}
```

## Integration Guides

## Related Projects

## 🌟 Star History

Star History Chart