
Optimize Documents for Windows (#266)

* Add Windows Setup Help

* Optimize documents/bootscripts for Windows User

* Correct some description
spicysama 1 year ago
parent
commit
dc8c834444
5 files changed, 174 additions and 119 deletions
  1. BIN
      docs/assets/figs/VS_1.jpg
  2. +55 -42
      docs/en/index.md
  3. +55 -38
      docs/zh/index.md
  4. +53 -36
      install_env.bat
  5. +11 -3
      start.bat

BIN
docs/assets/figs/VS_1.jpg


+ 55 - 42
docs/en/index.md

@@ -26,54 +26,67 @@ This codebase is released under the `BSD-3-Clause` license, and all models are r
 - GPU Memory: 4GB (for inference), 16GB (for fine-tuning)
 - System: Linux, Windows
 
-~~We recommend Windows users to use WSL2 or docker to run the codebase, or use the integrated environment developed by the community.~~
-
 ## Windows Setup
 
 Windows professional users may consider WSL2 or Docker to run the codebase.
 
 Non-professional Windows users can consider the following methods to run the codebase without a Linux environment (with model compilation capabilities aka `torch.compile`):
 
-0. Extract the project zip file.
-1. Click `install_env.bat` to install the environment.
-
-   1. You can decide whether to use a mirror site for downloading by editing the `USE_MIRROR` item in `install_env.bat`.
-   2. The default is `preview`, using a mirror site and the latest development version of torch (the only way to activate the compilation method).
-   3. `false` uses the original site to download the environment. `true` uses the mirror site to download the stable version of torch and other environments.
-
-2. (Optional, this step is to activate the model compilation environment)
-
-   1. Use the following links to download the `LLVM` compiler.
-
-      - [LLVM-17.0.6 (original site download)](https://huggingface.co/fishaudio/fish-speech-1/resolve/main/LLVM-17.0.6-win64.exe?download=true)
-      - [LLVM-17.0.6 (mirror site download)](https://hf-mirror.com/fishaudio/fish-speech-1/resolve/main/LLVM-17.0.6-win64.exe?download=true)
-      - After downloading `LLVM-17.0.6-win64.exe`, double-click to install, choose the appropriate installation location, and most importantly, check `Add Path to Current User` to add the environment variable.
-      - Confirm the installation is complete.
-
-   2. Download and install the Microsoft Visual C++ Redistributable Package to resolve potential .dll missing issues.
-      - [MSVC++ 14.40.33810.0 download](https://aka.ms/vs/17/release/vc_redist.x64.exe)
-
-3. Double-click `start.bat` to enter the Fish-Speech training and inference configuration WebUI page.
-
-   - Want to go directly to the inference page? Edit the `API_FLAGS.txt` in the project root directory, and modify the first three lines as follows:
-
-   ```text
-   --infer
-   # --api
-   # --listen ...
-   ...
-   ```
-
-   - Want to start the API server? Edit the API_FLAGS.txt in the project root directory, and modify the first three lines as follows:
-
-   ```text
-   # --infer
-   --api
-   --listen ...
-   ...
-   ```
-
-4. (Optional) Double-click run_cmd.bat to enter the conda/python command line environment of this project.
+<ol>
+   <li>Unzip the project package.</li>
+   <li>Click <code>install_env.bat</code> to install the environment.
+      <ul>
+            <li>You can decide whether to use a mirror site for downloads by editing the <code>USE_MIRROR</code> item in <code>install_env.bat</code>.</li>
+            <li><code>USE_MIRROR=false</code> downloads the latest stable version of <code>torch</code> from the original site. <code>USE_MIRROR=true</code> downloads the latest version of <code>torch</code> from a mirror site. The default is <code>true</code>.</li>
+            <li>You can decide whether to enable the compiled environment download by editing the <code>INSTALL_TYPE</code> item in <code>install_env.bat</code>.</li>
+            <li><code>INSTALL_TYPE=preview</code> downloads the preview version with the compiled environment. <code>INSTALL_TYPE=stable</code> downloads the stable version without the compiled environment.</li>
+      </ul>
+   </li>
+   <li>If step 2 set <code>INSTALL_TYPE=preview</code>, execute this step (optional, for activating the compiled-model environment):
+      <ol>
+            <li>Download the LLVM compiler using the following links:
+               <ul>
+                  <li><a href="https://huggingface.co/fishaudio/fish-speech-1/resolve/main/LLVM-17.0.6-win64.exe?download=true">LLVM-17.0.6 (original site download)</a></li>
+                  <li><a href="https://hf-mirror.com/fishaudio/fish-speech-1/resolve/main/LLVM-17.0.6-win64.exe?download=true">LLVM-17.0.6 (mirror site download)</a></li>
+                  <li>After downloading <code>LLVM-17.0.6-win64.exe</code>, double-click to install it, choose an appropriate installation location, and most importantly, check <code>Add Path to Current User</code> to add to the environment variables.</li>
+                  <li>Confirm the installation is complete.</li>
+               </ul>
+            </li>
+            <li>Download and install the Microsoft Visual C++ Redistributable package to resolve potential .dll missing issues.
+               <ul>
+                  <li><a href="https://aka.ms/vs/17/release/vc_redist.x64.exe">MSVC++ 14.40.33810.0 Download</a></li>
+               </ul>
+            </li>
+            <li>Download and install Visual Studio Community Edition to obtain MSVC++ build tools, resolving LLVM header file dependencies.
+               <ul>
+                  <li><a href="https://visualstudio.microsoft.com/zh-hans/downloads/">Visual Studio Download</a></li>
+                  <li>After installing Visual Studio Installer, download Visual Studio Community 2022.</li>
+                  <li>Click the <code>Modify</code> button as shown below, find the <code>Desktop development with C++</code> option, and check it for download.</li>
+                  <p align="center">
+                     <img src="/assets/figs/VS_1.jpg" width="75%">
+                  </p>
+               </ul>
+            </li>
+      </ol>
+   </li>
+   <li>Double-click <code>start.bat</code> to enter the Fish-Speech training and inference configuration WebUI page.
+      <ul>
+            <li>(Optional) Want to go directly to the inference page? Edit the <code>API_FLAGS.txt</code> in the project root directory and modify the first three lines as follows:
+               <pre><code>--infer
+# --api
+# --listen ...
+...</code></pre>
+            </li>
+            <li>(Optional) Want to start the API server? Edit the <code>API_FLAGS.txt</code> in the project root directory and modify the first three lines as follows:
+               <pre><code># --infer
+--api
+--listen ...
+...</code></pre>
+            </li>
+      </ul>
+   </li>
+   <li>(Optional) Double-click <code>run_cmd.bat</code> to enter the conda/python command line environment of this project.</li>
+</ol>
 
 ## Linux Setup
 

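The two `API_FLAGS.txt` layouts above switch between WebUI-inference and API-server modes simply by commenting lines out with `#`. As a minimal sketch (a hypothetical helper, not part of the repo), reading such a flag file reduces to filtering out commented and empty lines:

```python
def active_flags(text: str) -> list[str]:
    """Return the uncommented, non-empty lines of an API_FLAGS.txt-style file."""
    return [
        line.strip()
        for line in text.splitlines()
        if line.strip() and not line.strip().startswith("#")
    ]
```

With the "inference page" layout above, only `--infer` survives; with the "API server" layout, `--api` and `--listen ...` do.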
+ 55 - 38
docs/zh/index.md

@@ -26,50 +26,67 @@
 - GPU 内存: 4GB (用于推理), 16GB (用于微调)
 - 系统: Linux, Windows
 
-~~我们建议 Windows 用户使用 WSL2 或 docker 来运行代码库, 或者使用由社区开发的整合环境.~~
-
 ## Windows 配置
 
 Windows 专业用户可以考虑 WSL2 或 docker 来运行代码库。
 
 Windows 非专业用户可考虑以下为免 Linux 环境的基础运行方法(附带模型编译功能,即 `torch.compile`):
 
-0. 解压项目压缩包。
-1. 点击`install_env.bat`安装环境。
-   - 可以通过编辑`install_env.bat`的`USE_MIRROR`项来决定是否使用镜像站下载。
-   - 默认为`preview`, 使用镜像站且使用最新开发版本 torch(唯一激活编译方式)。
-   - `false`使用原始站下载环境。`true`为从镜像站下载稳定版本 torch 和其余环境。
-2. (可跳过,此步为激活编译模型环境)
-
-   1. 使用如下链接下载`LLVM`编译器。
-      - [LLVM-17.0.6 (原始站点下载)](https://huggingface.co/fishaudio/fish-speech-1/resolve/main/LLVM-17.0.6-win64.exe?download=true)
-      - [LLVM-17.0.6 (镜像站点下载)](https://hf-mirror.com/fishaudio/fish-speech-1/resolve/main/LLVM-17.0.6-win64.exe?download=true)
-      - 下载完`LLVM-17.0.6-win64.exe`后,双击进行安装,选择合适的安装位置,最重要的是勾选`Add Path to Current User`添加环境变量。
-      - 确认安装完成。
-   2. 下载安装`Microsoft Visual C++ 可再发行程序包`, 解决潜在`.dll`丢失问题。
-      - [MSVC++ 14.40.33810.0 下载](https://aka.ms/vs/17/release/vc_redist.x64.exe)
-
-3. 双击`start.bat`, 进入 Fish-Speech 训练推理配置 WebUI 页面。
-
-   - 想直接进入推理页面?编辑项目根目录下的`API_FLAGS.txt`, 前三行修改成如下格式:
-
-   ```text
-   --infer
-   # --api
-   # --listen ...
-   ...
-   ```
-
-   - 想启动 API 服务器?编辑项目根目录下的`API_FLAGS.txt`, 前三行修改成如下格式:
-
-   ```text
-   # --infer
-   --api
-   --listen ...
-   ...
-   ```
-
-4. (可选)双击`run_cmd.bat`进入本项目的 conda/python 命令行环境
+<ol>
+   <li>解压项目压缩包。</li>
+   <li>点击 install_env.bat 安装环境。
+      <ul>
+            <li>可以通过编辑 install_env.bat 的 <code>USE_MIRROR</code> 项来决定是否使用镜像站下载。</li>
+            <li><code>USE_MIRROR=false</code> 使用原始站下载最新稳定版 <code>torch</code> 环境。<code>USE_MIRROR=true</code> 为从镜像站下载最新 <code>torch</code> 环境。默认为 <code>true</code>。</li>
+            <li>可以通过编辑 install_env.bat 的 <code>INSTALL_TYPE</code> 项来决定是否启用可编译环境下载。</li>
+            <li><code>INSTALL_TYPE=preview</code> 下载开发版编译环境。<code>INSTALL_TYPE=stable</code> 下载稳定版不带编译环境。</li>
+      </ul>
+   </li>
+   <li>若第 2 步设置了 <code>INSTALL_TYPE=preview</code>,则执行这一步(可跳过,此步为激活编译模型环境):
+      <ol>
+            <li>使用如下链接下载 LLVM 编译器。
+               <ul>
+                  <li><a href="https://huggingface.co/fishaudio/fish-speech-1/resolve/main/LLVM-17.0.6-win64.exe?download=true">LLVM-17.0.6(原站站点下载)</a></li>
+                  <li><a href="https://hf-mirror.com/fishaudio/fish-speech-1/resolve/main/LLVM-17.0.6-win64.exe?download=true">LLVM-17.0.6(镜像站点下载)</a></li>
+                  <li>下载完 LLVM-17.0.6-win64.exe 后,双击进行安装,选择合适的安装位置,最重要的是勾选 <code>Add Path to Current User</code> 添加环境变量。</li>
+                  <li>确认安装完成。</li>
+               </ul>
+            </li>
+            <li>下载安装 Microsoft Visual C++ 可再发行程序包,解决潜在 .dll 丢失问题。
+               <ul>
+                  <li><a href="https://aka.ms/vs/17/release/vc_redist.x64.exe">MSVC++ 14.40.33810.0 下载</a></li>
+               </ul>
+            </li>
+            <li>下载安装 Visual Studio 社区版以获取 MSVC++ 编译工具, 解决 LLVM 的头文件依赖问题。
+               <ul>
+                  <li><a href="https://visualstudio.microsoft.com/zh-hans/downloads/">Visual Studio 下载</a></li>
+                  <li>安装好Visual Studio Installer之后,下载Visual Studio Community 2022</li>
+                  <li>如下图点击<code>修改</code>按钮,找到<code>使用C++的桌面开发</code>项,勾选下载</li>
+                  <p align="center">
+                     <img src="/assets/figs/VS_1.jpg" width="75%">
+                  </p>
+               </ul>
+            </li>
+      </ol>
+   </li>
+   <li>双击 start.bat, 进入 Fish-Speech 训练推理配置 WebUI 页面。
+      <ul>
+            <li>(可选) 想直接进入推理页面?编辑项目根目录下的 <code>API_FLAGS.txt</code>, 前三行修改成如下格式:
+               <pre><code>--infer
+# --api
+# --listen ...
+...</code></pre>
+            </li>
+            <li>(可选) 想启动 API 服务器?编辑项目根目录下的 <code>API_FLAGS.txt</code>, 前三行修改成如下格式:
+               <pre><code># --infer
+--api
+--listen ...
+...</code></pre>
+            </li>
+      </ul>
+   </li>
+   <li>(可选)双击 <code>run_cmd.bat</code> 进入本项目的 conda/python 命令行环境</li>
+</ol>
 
 ## Linux 配置
 

+ 53 - 36
install_env.bat

@@ -1,9 +1,10 @@
 @echo off
 chcp 65001
 
-set USE_MIRROR=preview
+set USE_MIRROR=true
 set INSTALL_TYPE=preview
-echo use_mirror = %USE_MIRROR%
+echo "USE_MIRROR: %USE_MIRROR%"
+echo "INSTALL_TYPE: %INSTALL_TYPE%"
 setlocal enabledelayedexpansion
 
 cd /D "%~dp0"
@@ -124,11 +125,14 @@ if "!INSTALL_TYPE!" == "preview" (
     set "packages=!packages! triton_windows"
 )
 
-set "HF_ENDPOINT=https://hf-mirror.com"
-if "!USE_MIRROR!" == "false" (
-    set "HF_ENDPOINT=https://huggingface.co"
+set "HF_ENDPOINT=https://huggingface.co"
+set "no_proxy="
+if "!USE_MIRROR!" == "true" (
+    set "HF_ENDPOINT=https://hf-mirror.com"
+    set "no_proxy=localhost, 127.0.0.1, 0.0.0.0"
 )
 echo "HF_ENDPOINT: !HF_ENDPOINT!"
+echo "NO_PROXY: !no_proxy!"
 
 set "install_packages="
 for %%p in (%packages%) do (
@@ -138,47 +142,17 @@ for %%p in (%packages%) do (
     )
 )
 
-
 if not "!install_packages!"=="" (
     echo.
     echo Installing: !install_packages!
-
     for %%p in (!install_packages!) do (
-        if "!USE_MIRROR!"=="true" (
-            if "%%p"=="torch" (
-                %PIP_CMD% install torch --index-url https://mirror.sjtu.edu.cn/pytorch-wheels/cu121 --no-warn-script-location
-            ) else if "%%p"=="torchvision" (
-                %PIP_CMD% install torchvision --index-url https://mirror.sjtu.edu.cn/pytorch-wheels/cu121 --no-warn-script-location
-            ) else if "%%p"=="torchaudio" (
-                %PIP_CMD% install torchaudio --index-url https://mirror.sjtu.edu.cn/pytorch-wheels/cu121 --no-warn-script-location
-            ) else if "%%p"=="openai-whisper" (
-                %PIP_CMD% install -i https://pypi.tuna.tsinghua.edu.cn/simple openai-whisper --no-warn-script-location
-            ) else if "%%p"=="fish-speech" (
-                %PIP_CMD% install -e . -i https://pypi.tuna.tsinghua.edu.cn/simple
-            )
-        ) 
-
-        if "!USE_MIRROR!"=="false" (
-            if "%%p"=="torch" (
-                %PIP_CMD% install torch==2.4.0.dev20240427+cu121 --index-url https://download.pytorch.org/whl/nightly/cu121 --no-warn-script-location
-            ) else if "%%p"=="torchvision" (
-                %PIP_CMD% install torchvision==0.19.0.dev20240428+cu121 --index-url https://download.pytorch.org/whl/nightly/cu121 --no-warn-script-location
-            ) else if "%%p"=="torchaudio" (
-                %PIP_CMD% install torchaudio==2.2.0.dev20240427+cu121 --index-url https://download.pytorch.org/whl/nightly/cu121 --no-warn-script-location
-            ) else if "%%p"=="openai-whisper" (
-                %PIP_CMD% install openai-whisper --no-warn-script-location
-            ) else if "%%p"=="fish-speech" (
-                %PIP_CMD% install -e .
-            )
-        )
-        
         if "!INSTALL_TYPE!"=="preview" (
             if "%%p"=="torch" (
                 set "WHEEL_FILE=torch-2.4.0.dev20240427+cu121-cp310-cp310-win_amd64.whl"
                 set "URL=!HF_ENDPOINT!/datasets/SpicyqSama007/windows_compile/resolve/main/torch-2.4.0.dev20240427_cu121-cp310-cp310-win_amd64.whl?download=true"
                 set "CHKSUM=b091308f4cb74e63d0323afd67c92f2279d9e488d8cbf467bcc7b939bcd74e0b"
                 :TORCH_DOWNLOAD
-		        echo "%CD%\!WHEEL_FILE!"
+                echo "%CD%\!WHEEL_FILE!"
                 if not exist "%CD%\!WHEEL_FILE!" (
                     call curl -Lk "!URL!" --output "!WHEEL_FILE!"
                 )
@@ -257,7 +231,50 @@ if not "!install_packages!"=="" (
             )
             
         )
+    )
+)
 
+set "install_packages="
+for %%p in (%packages%) do (
+    %PIP_CMD% show %%p >nul 2>&1
+    if errorlevel 1 (
+        set "install_packages=!install_packages! %%p"
+    )
+)
+
+if not "!install_packages!"=="" (
+    echo.
+    echo Installing: !install_packages!
+
+    for %%p in (!install_packages!) do (
+        if "!USE_MIRROR!"=="true" (
+            if "%%p"=="torch" (
+                %PIP_CMD% install torch --index-url https://mirror.sjtu.edu.cn/pytorch-wheels/cu121 --no-warn-script-location
+            ) else if "%%p"=="torchvision" (
+                %PIP_CMD% install torchvision --index-url https://mirror.sjtu.edu.cn/pytorch-wheels/cu121 --no-warn-script-location
+            ) else if "%%p"=="torchaudio" (
+                %PIP_CMD% install torchaudio --index-url https://mirror.sjtu.edu.cn/pytorch-wheels/cu121 --no-warn-script-location
+            ) else if "%%p"=="openai-whisper" (
+                %PIP_CMD% install -i https://pypi.tuna.tsinghua.edu.cn/simple openai-whisper --no-warn-script-location
+            ) else if "%%p"=="fish-speech" (
+                %PIP_CMD% install -e . -i https://pypi.tuna.tsinghua.edu.cn/simple
+            )
+        ) 
+
+        if "!USE_MIRROR!"=="false" (
+            if "%%p"=="torch" (
+                %PIP_CMD% install torch --index-url https://download.pytorch.org/whl/nightly/cu121 --no-warn-script-location
+            ) else if "%%p"=="torchvision" (
+                %PIP_CMD% install torchvision --index-url https://download.pytorch.org/whl/nightly/cu121 --no-warn-script-location
+            ) else if "%%p"=="torchaudio" (
+                %PIP_CMD% install torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121 --no-warn-script-location
+            ) else if "%%p"=="openai-whisper" (
+                %PIP_CMD% install openai-whisper --no-warn-script-location
+            ) else if "%%p"=="fish-speech" (
+                %PIP_CMD% install -e .
+            )
+        )
+        
     )
 )
 echo Environment Check: Success.
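When `INSTALL_TYPE=preview`, the script above curls prebuilt wheels and compares each against a pinned SHA-256 (the `CHKSUM` variable) before installing, retrying the download on mismatch. A minimal Python sketch of that integrity check, under the assumption that streaming-and-compare is all the batch logic does (the path and hash here are illustrative):

```python
import hashlib

def sha256_matches(path: str, expected_hex: str) -> bool:
    """Stream the file in 1 MiB chunks and compare its SHA-256 digest
    to the pinned hex string, case-insensitively."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest().lower() == expected_hex.lower()
```

A caller would re-download the wheel whenever this returns `False`, mirroring the `:TORCH_DOWNLOAD` retry label in the batch file.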

+ 11 - 3
start.bat

@@ -1,17 +1,25 @@
 @echo off
 chcp 65001
 
-
+set USE_MIRROR=true
 set PYTHONPATH=%~dp0
 set PYTHON_CMD=%cd%\fishenv\env\python
 set API_FLAG_PATH=%~dp0API_FLAGS.txt
 
-set no_proxy="localhost, 127.0.0.1, 0.0.0.0"
-set HF_ENDPOINT=https://hf-mirror.com
+
 %PYTHON_CMD% .\tools\download_models.py
 
 setlocal enabledelayedexpansion
 
+set "HF_ENDPOINT=https://huggingface.co"
+set "no_proxy="
+if "%USE_MIRROR%" == "true" (
+    set "HF_ENDPOINT=https://hf-mirror.com"
+    set "no_proxy=localhost, 127.0.0.1, 0.0.0.0"
+)
+echo "HF_ENDPOINT: !HF_ENDPOINT!"
+echo "NO_PROXY: !no_proxy!"
+
 set "API_FLAGS="
 set "flags="