# Fish Speech

**English** | [简体中文](docs/README.zh.md) | [Portuguese](docs/README.pt-BR.md) | [日本語](docs/README.ja.md) | [한국어](docs/README.ko.md)




> [!IMPORTANT]
> **License Notice**
> This codebase is released under the Apache License, and all model weights are released under the CC-BY-NC-SA-4.0 License. Please refer to LICENSE for more details.

> [!WARNING]
> **Legal Disclaimer**
> We accept no responsibility for any illegal use of this codebase. Please refer to your local laws regarding the DMCA and other related laws.

> [!WARNING]
> **About Current Model Performance**
>
> We are sorry about the current model's performance; there are some known bugs in the repo, and we are working hard to fix them (it won't be long).
>
> Current issues may include timbre switching and timbre degradation over long sequences.
>
> If you are willing to help us solve these problems, pull requests are welcome :)


## 🎉 Announcement

We are excited to announce that we have rebranded to OpenAudio, introducing a new series of advanced Text-to-Speech models built upon the foundation of Fish-Speech.

We are proud to release OpenAudio-S1 as the first model in this series, delivering significant improvements in quality, performance, and capabilities.

OpenAudio-S1 comes in two versions: OpenAudio-S1 and OpenAudio-S1-mini. Both models are now available on Fish Audio Playground (for OpenAudio-S1) and Hugging Face (for OpenAudio-S1-mini).

Visit the OpenAudio website for the blog and tech report.

## Highlights ✨

### Excellent TTS Quality

We use the Seed TTS Eval metrics to evaluate model performance. The results show that OpenAudio S1 achieves 0.008 WER and 0.004 CER on English text, significantly better than previous models. (English, automatic evaluation based on OpenAI gpt-4o-transcribe; speaker distance computed with Revai/pyannote-wespeaker-voxceleb-resnet34-LM.)

| Model | Word Error Rate (WER) | Character Error Rate (CER) | Speaker Distance |
|-------|-----------------------|----------------------------|------------------|
| S1 | 0.008 | 0.004 | 0.332 |
| S1-mini | 0.011 | 0.005 | 0.380 |
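For reference, WER and CER are edit-distance metrics: the word- (or character-) level Levenshtein distance between the model's transcript and the reference, divided by the reference length. A minimal self-contained sketch (not the evaluation code used for the table above):

```python
def edit_distance(ref, hyp):
    # Classic single-row dynamic-programming Levenshtein distance
    # over two token sequences.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,        # deletion
                dp[j - 1] + 1,    # insertion
                prev + (r != h),  # substitution (free if tokens match)
            )
    return dp[len(hyp)]

def wer(reference: str, hypothesis: str) -> float:
    # Word error rate: word-level edit distance / number of reference words.
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    # Character error rate: character-level edit distance / reference length.
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, `wer("a b c d", "a b x d")` returns 0.25: one substituted word out of four.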

### Best Model in TTS-Arena2 🏆

OpenAudio S1 has achieved the #1 ranking on TTS-Arena2, the benchmark for text-to-speech evaluation:

TTS-Arena2 Ranking

### Speech Control

OpenAudio S1 supports a variety of emotion, tone, and special audio-effect markers to enhance speech synthesis:

  • Basic emotions:

    (angry) (sad) (excited) (surprised) (satisfied) (delighted) 
    (scared) (worried) (upset) (nervous) (frustrated) (depressed)
    (empathetic) (embarrassed) (disgusted) (moved) (proud) (relaxed)
    (grateful) (confident) (interested) (curious) (confused) (joyful)
    
  • Advanced emotions:

    (disdainful) (unhappy) (anxious) (hysterical) (indifferent) 
    (impatient) (guilty) (scornful) (panicked) (furious) (reluctant)
    (keen) (disapproving) (negative) (denying) (astonished) (serious)
    (sarcastic) (conciliative) (comforting) (sincere) (sneering)
    (hesitating) (yielding) (painful) (awkward) (amused)
    
  • Tone markers:

    (in a hurry tone) (shouting) (screaming) (whispering) (soft tone)
    
  • Special audio effects:

    (laughing) (chuckling) (sobbing) (crying loudly) (sighing) (panting)
    (groaning) (crowd laughing) (background laughter) (audience laughing)
    

You can also write `Ha,ha,ha` directly in the text to control laughter; many other cases are waiting to be explored.
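For illustration, here is a made-up input line mixing the markers above with the `Ha,ha,ha` trick (purely a syntax example, not from the official docs):

```
(excited) We finally launched the new model! (laughing) Ha,ha,ha, I still can't believe it.
(whispering) Don't tell anyone yet... (sighing) there is still so much to do.
```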

(These markers currently support English, Chinese, and Japanese; more languages are coming soon!)

### Two Types of Models

| Model | Size | Availability | Features |
|-------|------|--------------|----------|
| S1 | 4B parameters | Available on fish.audio | Full-featured flagship model |
| S1-mini | 0.5B parameters | Available on Hugging Face (HF Space) | Distilled version with core capabilities |

Both S1 and S1-mini incorporate online Reinforcement Learning from Human Feedback (RLHF).

## Features

  1. Zero-shot & Few-shot TTS: Input a 10 to 30-second vocal sample to generate high-quality TTS output. For detailed guidelines, see Voice Cloning Best Practices.

  2. Multilingual & Cross-lingual Support: Simply copy and paste multilingual text into the input box—no need to worry about the language. Currently supports English, Japanese, Korean, Chinese, French, German, Arabic, and Spanish.

  3. No Phoneme Dependency: The model has strong generalization capabilities and does not rely on phonemes for TTS. It can handle text in any language script.

  4. Highly Accurate: Achieves a low CER (Character Error Rate) of around 0.4% and WER (Word Error Rate) of around 0.8% for Seed-TTS Eval.

  5. Fast: With fish-tech acceleration, the real-time factor is approximately 1:5 on an Nvidia RTX 4060 laptop and 1:15 on an Nvidia RTX 4090.

  6. WebUI Inference: Features an easy-to-use, Gradio-based web UI compatible with Chrome, Firefox, Edge, and other browsers.

  7. GUI Inference: Offers a PyQt6 graphical interface that works seamlessly with the API server. Supports Linux, Windows, and macOS. See GUI.

  8. Deploy-Friendly: Easily set up an inference server with native support for Linux and Windows (macOS coming soon), minimizing speed loss.
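The real-time factor (RTF) figures in item 5 can be read as seconds of compute per second of audio: 1:5 means one second of wall-clock time yields roughly five seconds of audio (RTF ≈ 0.2). A hypothetical sketch of how such a number is measured; `generate` is a placeholder callable, not an actual Fish-Speech API:

```python
import time

def real_time_factor(generate, text: str) -> float:
    # `generate(text)` stands in for any TTS call that returns the
    # duration of the synthesized audio in seconds (hypothetical).
    start = time.perf_counter()
    audio_seconds = generate(text)
    elapsed = time.perf_counter() - start
    # RTF below 1.0 is faster than real time; 1:5 corresponds to RTF = 0.2,
    # and 1:15 to RTF ≈ 0.067.
    return elapsed / audio_seconds
```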

## Media & Demos

### Social Media

Latest Demo on X

### Interactive Demos

Try OpenAudio S1 · Try S1 Mini

### Video Showcases

OpenAudio S1 Video

### Audio Samples
High-quality audio samples will be available soon, demonstrating our multilingual TTS capabilities across different languages and emotions.


## Documents

## Credits

## Tech Report (V1.4)

@misc{fish-speech-v1.4,
      title={Fish-Speech: Leveraging Large Language Models for Advanced Multilingual Text-to-Speech Synthesis},
      author={Shijia Liao and Yuxuan Wang and Tianyu Li and Yifan Cheng and Ruoyi Zhang and Rongzhi Zhou and Yijin Xing},
      year={2024},
      eprint={2411.01156},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2411.01156},
}