| File | Commit | Message | Last updated |
|---|---|---|---|
| .. | | | |
| llama | 65e646ae31 | Support prompt auto truncating codebook | 2 years ago |
| vqgan | e2413e25b1 | Update vqgan default config & fix vqgan inference | 2 years ago |
| api_server.py | 939c52f273 | Optimize kui usage (#67) | 2 years ago |
| extract_model.py | c25c946695 | Update vqgan toolchain | 2 years ago |
| merge_asr_files.py | a811738b2d | Allow inplace transcribing. Fix some bugs. Add options. | 2 years ago |
| to_flac.py | 491b1d7125 | rename folder | 2 years ago |
| whisper_asr.py | a811738b2d | Allow inplace transcribing. Fix some bugs. Add options. | 2 years ago |