
# Fish Speech

**English** | [简体中文](docs/README.zh.md) | [Portuguese](docs/README.pt-BR.md) | [日本語](docs/README.ja.md) | [한국어](docs/README.ko.md)
This codebase is released under the Apache License, and all model weights are released under the CC-BY-NC-SA-4.0 license. Please refer to LICENSE for more details.

We are excited to announce that we have changed our name to OpenAudio; this will be a brand-new series of Text-to-Speech models.

A demo is available at the Fish Audio Playground.

Visit the OpenAudio website for the blog and tech report.

## Features

### OpenAudio-S1 (Fish-Speech's new version)

  1. This model has ALL the features that Fish-Speech had.

  2. OpenAudio S1 supports a variety of emotion, tone, and special markers to enhance speech synthesis:

(angry) (sad) (disdainful) (excited) (surprised) (satisfied) (unhappy) (anxious) (hysterical) (delighted) (scared) (worried) (indifferent) (upset) (impatient) (nervous) (guilty) (scornful) (frustrated) (depressed) (panicked) (furious) (empathetic) (embarrassed) (reluctant) (disgusted) (keen) (moved) (proud) (relaxed) (grateful) (confident) (interested) (curious) (confused) (joyful) (disapproving) (negative) (denying) (astonished) (serious) (sarcastic) (conciliative) (comforting) (sincere) (sneering) (hesitating) (yielding) (painful) (awkward) (amused)

It also supports tone markers:

(in a hurry tone) (shouting) (screaming) (whispering) (soft tone)

A few special markers are also supported:

(laughing) (chuckling) (sobbing) (crying loudly) (sighing) (panting) (groaning) (crowd laughing) (background laughter) (audience laughing)

You can also use **Ha,ha,ha** to control laughter; there are many other cases waiting to be explored.
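For instance, markers can be placed inline in the input text like this (the sentence itself is a made-up illustration; exact placement rules and behavior depend on the model):

```
(excited) We finally launched the new model! (laughing) Ha,ha,ha, I still can't believe it. (whispering) Let's keep the details quiet until the announcement.
```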
  3. The OpenAudio S1 series includes the following sizes:

     - S1 (4B, proprietary): The full-sized model.
     - S1-mini (0.5B, open-sourced): A distilled version of S1.

     Both S1 and S1-mini incorporate online Reinforcement Learning from Human Feedback (RLHF).

  4. Evaluations

     Seed TTS Eval metrics (English, automatic evaluation, based on OpenAI gpt-4o-transcribe; speaker distance computed with Revai/pyannote-wespeaker-voxceleb-resnet34-LM):

     | Model   | WER (Word Error Rate) | CER (Character Error Rate) | Speaker Distance |
     | ------- | --------------------- | -------------------------- | ---------------- |
     | S1      | 0.008                 | 0.004                      | 0.332            |
     | S1-mini | 0.011                 | 0.005                      | 0.380            |
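For reference, WER and CER are edit-distance metrics: the word-level (or character-level) Levenshtein distance between the hypothesis transcript and the reference, divided by the reference length. A minimal sketch in Python (the function names are illustrative, not part of this repository):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (one-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,        # deletion
                dp[j - 1] + 1,    # insertion
                prev + (r != h),  # substitution (zero cost on a match)
            )
    return dp[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance over reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: character-level edit distance over reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, `wer("the cat sat on the mat", "the cat sat on mat")` is one deletion over six reference words. The reported evaluation presumably uses a production-grade implementation with transcript normalization, but the metric itself is this ratio.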

## Disclaimer

We do not hold any responsibility for any illegal use of this codebase. Please refer to your local laws regarding the DMCA and other related laws.

## Videos

To be continued.

## Documents

Note that the current model DOES NOT SUPPORT FINE-TUNING.

## Credits

### Tech Report (V1.4)

```bibtex
@misc{fish-speech-v1.4,
      title={Fish-Speech: Leveraging Large Language Models for Advanced Multilingual Text-to-Speech Synthesis},
      author={Shijia Liao and Yuxuan Wang and Tianyu Li and Yifan Cheng and Ruoyi Zhang and Rongzhi Zhou and Yijin Xing},
      year={2024},
      eprint={2411.01156},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2411.01156},
}
```