
fix: update broken import in quantize tool (#1249)

The `load_model` function in `fish_speech.models.text2semantic.inference`
was renamed to `init_model`, breaking the quantization tool with an
ImportError on startup.

Closes #1248
Gaël THEROND, 1 week ago
Commit 27736aa1b5
1 changed file with 2 additions and 2 deletions
    tools/llama/quantize.py

+ 2 - 2
tools/llama/quantize.py

@@ -13,7 +13,7 @@ import torch
 import torch.nn as nn
 import torch.nn.functional as F
 
-from fish_speech.models.text2semantic.inference import load_model
+from fish_speech.models.text2semantic.inference import init_model
 from fish_speech.models.text2semantic.llama import find_multiple
 
 ##### Quantization Primitives ######
@@ -445,7 +445,7 @@ def quantize(checkpoint_path: Path, mode: str, groupsize: int, timestamp: str) -
     print("Loading model ...")
     t0 = time.time()
 
-    model, _ = load_model(
+    model, _ = init_model(
         checkpoint_path=checkpoint_path,
         device=device,
         precision=precision,
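A defensive pattern that would have avoided this breakage is a fallback lookup that accepts either name of the loader. The sketch below is hypothetical and uses a stand-in namespace in place of `fish_speech.models.text2semantic.inference` (the real package is not assumed to be installed); only the fallback pattern itself is the point.

```python
from types import SimpleNamespace

# Stand-in for fish_speech.models.text2semantic.inference, which only
# exports the renamed init_model in this scenario (hypothetical body).
inference = SimpleNamespace(
    init_model=lambda **kwargs: (f"model:{kwargs['checkpoint_path']}", None)
)

# Prefer the new name; fall back to the old one when running against an
# older release that still exports load_model.
load_model = getattr(inference, "init_model", None) or getattr(
    inference, "load_model", None
)
if load_model is None:
    raise ImportError("neither init_model nor load_model is available")

model, _ = load_model(checkpoint_path="ckpt.pth")
```

A plain `try: from … import init_model as load_model / except ImportError: from … import load_model` achieves the same at import time; the `getattr` form is shown here so the example stays self-contained.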