> [!IMPORTANT]
> **License Notice**
> This codebase and its associated model weights are released under the FISH AUDIO RESEARCH LICENSE. Please refer to LICENSE for more details.

> [!WARNING]
> **Legal Disclaimer**
> We do not hold any responsibility for any illegal usage of this codebase. Please consult your local laws regarding the DMCA and other related regulations.
These are the official documents for Fish Speech; follow the instructions to get started easily.
The best text-to-speech system among both open-source and closed-source offerings.
Fish Audio S2 is the latest model developed by Fish Audio, designed to generate speech that sounds natural, realistic, and emotionally rich — not robotic, not flat, and not constrained to studio-style narration.
Fish Audio S2 focuses on daily conversation and dialogue, enabling native multi-speaker and multi-turn generation, and also supports instruction control.
The S2 series contains several models; the open-source release is S2-Pro, which is the best model in the collection.
Visit the Fish Audio website for a live playground.
| Model | Size | Availability | Description |
|---|---|---|---|
| S2-Pro | 4B parameters | huggingface | Full-featured flagship model with maximum quality and stability |
| S2-Flash | - | fish.audio | Our closed-source model with faster speed and lower latency |
More details of the model can be found in the technical report.
Fish Audio S2 enables localized control over speech generation by embedding natural-language instructions directly at specific word or phrase positions within the text. Rather than relying on a fixed set of predefined tags, S2 accepts free-form textual descriptions — such as [whisper in small voice], [professional broadcast tone], or [pitch up] — allowing open-ended expression control at the word level.
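A minimal sketch of composing such an input. The bracketed free-form instruction syntax (e.g. `[whisper in small voice]`) comes from the examples above; the helper function itself is hypothetical and not part of the Fish Speech API.

```python
# Build an input string with inline, free-form instruction tags.
# NOTE: with_instruction is a hypothetical illustration of the
# "[instruction] text" convention described above, not an official API.

def with_instruction(instruction: str, text: str) -> str:
    """Prefix a span of text with a bracketed natural-language instruction."""
    return f"[{instruction}] {text}"

prompt = (
    with_instruction("professional broadcast tone", "Good evening, here is the news.")
    + " "
    + with_instruction("whisper in small voice", "But keep this part between us.")
)
print(prompt)
```

Because instructions are plain natural language rather than a fixed tag set, any descriptive phrase can be placed at the exact word or phrase it should affect.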
Fish Audio S2 supports high-quality multilingual text-to-speech without requiring phonemes or language-specific preprocessing. Supported languages include:
English, Chinese, Japanese, Korean, Arabic, German, French...
AND MORE!
The list is constantly expanding, check Fish Audio for the latest releases.
Fish Audio S2 allows users to upload reference audio containing multiple speakers; the model captures each speaker's characteristics via the `<|speaker:i|>` token. You can then direct the model's performance with the speaker-id token, allowing a single generation to include multiple speakers, so you no longer need to upload reference audio separately for each speaker.
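A minimal sketch of how a multi-turn script could be laid out with the `<|speaker:i|>` tokens described above. The token format is taken from the text; the formatter function is a hypothetical illustration, not part of the official API.

```python
# Render (speaker_id, utterance) pairs as a single tagged dialogue script
# using the <|speaker:i|> token convention described above.
# NOTE: format_dialogue is a hypothetical helper for illustration only.

def format_dialogue(turns: list[tuple[int, str]]) -> str:
    """Join dialogue turns into one script, each line tagged by speaker id."""
    return "\n".join(f"<|speaker:{i}|> {text}" for i, text in turns)

script = format_dialogue([
    (0, "Did you finish the report?"),
    (1, "Almost. I just need to check the numbers."),
    (0, "Great, send it over when you're done."),
])
print(script)
```

Each line carries its own speaker tag, so one generation request can interleave as many voices as the reference audio contains.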
Thanks to the expanded model context, the model can now use preceding information to improve the expressiveness of subsequently generated content, increasing its naturalness.
Fish Audio S2 supports accurate voice cloning using a short reference sample (typically 10–30 seconds). The model captures timbre, speaking style, and emotional tendencies, producing realistic and consistent cloned voices without additional fine-tuning.
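A small sketch of a pre-flight check on a reference clip. The 10–30 second range comes from the text above; the helper and its defaults are hypothetical, and real pipelines may accept clips outside this range.

```python
# Check that a reference clip's duration falls in the 10-30 s window the
# document suggests for voice cloning. Pure arithmetic on sample counts;
# the thresholds come from the text, the helper is a hypothetical sketch.

def reference_duration_ok(num_samples: int, sample_rate: int,
                          lo: float = 10.0, hi: float = 30.0) -> bool:
    """Return True if the clip duration in seconds lies within [lo, hi]."""
    duration = num_samples / sample_rate
    return lo <= duration <= hi

# A 15-second clip at 44.1 kHz passes; a 5-second clip does not.
print(reference_duration_ok(15 * 44100, 44100))  # True
print(reference_duration_ok(5 * 44100, 44100))   # False
```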
```bibtex
@misc{fish-speech-v1.4,
  title={Fish-Speech: Leveraging Large Language Models for Advanced Multilingual Text-to-Speech Synthesis},
  author={Shijia Liao and Yuxuan Wang and Tianyu Li and Yifan Cheng and Ruoyi Zhang and Rongzhi Zhou and Yijin Xing},
  year={2024},
  eprint={2411.01156},
  archivePrefix={arXiv},
  primaryClass={cs.SD},
  url={https://arxiv.org/abs/2411.01156},
}
```