This codebase is released under the Apache License, and all model weights are released under the CC-BY-NC-SA 4.0 License. Please refer to LICENSE for more details.
We are excited to announce that we have renamed to OpenAudio, a brand-new series of text-to-speech models.
Demo available at Fish Audio Playground.
Visit the OpenAudio website for blog & tech report.
This model has ALL the features that fish-speech had.
OpenAudio S1 supports a variety of emotion, tone, and special markers to enhance speech synthesis:
(angry) (sad) (disdainful) (excited) (surprised) (satisfied) (unhappy) (anxious) (hysterical) (delighted) (scared) (worried) (indifferent) (upset) (impatient) (nervous) (guilty) (scornful) (frustrated) (depressed) (panicked) (furious) (empathetic) (embarrassed) (reluctant) (disgusted) (keen) (moved) (proud) (relaxed) (grateful) (confident) (interested) (curious) (confused) (joyful) (disapproving) (negative) (denying) (astonished) (serious) (sarcastic) (conciliative) (comforting) (sincere) (sneering) (hesitating) (yielding) (painful) (awkward) (amused)
It also supports tone markers:
(in a hurry tone) (shouting) (screaming) (whispering) (soft tone)
A few special markers are also supported:
(laughing) (chuckling) (sobbing) (crying loudly) (sighing) (panting) (groaning) (crowd laughing) (background laughter) (audience laughing)
You can also use **Ha,ha,ha** to control laughter directly; there are many other cases waiting to be explored.
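As an illustration, the markers above are written inline in the text you send to the model, in parentheses, before the speech they modify. The helper below is purely hypothetical (it is not part of the fish-speech API); it only shows how marked-up input text might be assembled and validated:

```python
# Illustrative only: emotion/tone/special markers are plain inline
# tokens in the input text. This small subset is taken from the
# marker lists above; no fish-speech code is called here.
SUPPORTED_MARKERS = {
    "(excited)", "(sad)", "(angry)",        # emotion markers
    "(whispering)", "(shouting)",           # tone markers
    "(laughing)", "(sighing)",              # special markers
}

def tag(text: str, marker: str) -> str:
    """Prefix text with a supported inline marker (hypothetical helper)."""
    if marker not in SUPPORTED_MARKERS:
        raise ValueError(f"unknown marker: {marker}")
    return f"{marker} {text}"

line = tag("I can't believe we won!", "(excited)")
print(line)  # (excited) I can't believe we won!
```

The resulting string would then be passed to the model as ordinary input text.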
S1-mini (0.5B, open-sourced): A distilled version of S1.
Both S1 and S1-mini incorporate online Reinforcement Learning from Human Feedback (RLHF).
Evaluations
Seed TTS Eval Metrics (English, auto eval, based on OpenAI gpt-4o-transcribe, speaker distance using Revai/pyannote-wespeaker-voxceleb-resnet34-LM):
We assume no responsibility for any illegal use of this codebase. Please refer to your local laws regarding the DMCA and other related regulations.
It should be noted that the current model DOES NOT SUPPORT fine-tuning.
@misc{fish-speech-v1.4,
  title={Fish-Speech: Leveraging Large Language Models for Advanced Multilingual Text-to-Speech Synthesis},
  author={Shijia Liao and Yuxuan Wang and Tianyu Li and Yifan Cheng and Ruoyi Zhang and Rongzhi Zhou and Yijin Xing},
  year={2024},
  eprint={2411.01156},
  archivePrefix={arXiv},
  primaryClass={cs.SD},
  url={https://arxiv.org/abs/2411.01156},
}