Fix Non-functional Project: Add Multi-source Pretrained Model and UVR5 Download, Enable Colab Compatibility, and FastAPI Optimization (#2300)
* Replace the outdated link and pin dependencies
* Update Colab
* Add Docs
* .
* Update Install.sh, Support multi download source
* .
* modified path
* Add source
* Update URL
* fix bugs
* fix bug
* fix bugs
* Fix colab
* update colab
* Update Docs
* update links

README.md (14 changed lines)

@@ -64,7 +64,7 @@ If you are a Windows user (tested with win>=10), you can [download the integrate
```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
-bash install.sh
+bash install.sh --source <HF|HF-Mirror|ModelScope> [--download-uvr5]
```

### macOS
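
The new `--source` flag in the hunk above selects where `install.sh` downloads the pretrained models from (one of `HF`, `HF-Mirror`, `ModelScope`), and `--download-uvr5` additionally fetches the UVR5 weights. A minimal illustrative invocation, with the source value chosen arbitrarily:

```bash
# Illustrative only: end-to-end setup using the flags added in this PR.
# --source accepts HF, HF-Mirror, or ModelScope (HF-Mirror is an arbitrary pick here);
# --download-uvr5 additionally fetches the UVR5 separation models.
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh --source HF-Mirror --download-uvr5
```
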
@@ -72,14 +72,12 @@ bash install.sh
**Note: The models trained with GPUs on Macs result in significantly lower quality compared to those trained on other devices, so we are temporarily using CPUs instead.**

1. Install Xcode command-line tools by running `xcode-select --install`.
-2. Install FFmpeg by running `brew install ffmpeg`.
-3. Install the program by running the following commands:
+2. Install the program by running the following commands:

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
-pip install -r extra-req.txt --no-deps
-pip install -r requirements.txt
+bash install.sh --source <HF|HF-Mirror|ModelScope> [--download-uvr5]
```

### Install Manually
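
In the macOS hunk, the `brew install ffmpeg` and `pip install` steps disappear, presumably because `install.sh` now covers them. The script itself is not part of this README diff, so the following is only a hypothetical sketch of how a script like it might map `--source` to a download endpoint and honor `--download-uvr5`; the hf-mirror.com URL is an assumption, and the real implementation in this PR may differ:

```bash
#!/usr/bin/env bash
# Hypothetical sketch -- NOT the real install.sh from this PR.
# Maps --source to a base URL and records whether UVR5 weights were requested.
set -e

DOWNLOAD_UVR5=false
BASE_URL=""

while [ $# -gt 0 ]; do
  case "$1" in
    --source)
      case "$2" in
        HF)         BASE_URL="https://huggingface.co" ;;
        HF-Mirror)  BASE_URL="https://hf-mirror.com" ;;   # assumed mirror URL
        ModelScope) BASE_URL="https://www.modelscope.cn" ;;
        *) echo "Unknown source: $2" >&2; exit 1 ;;
      esac
      shift 2
      ;;
    --download-uvr5)
      DOWNLOAD_UVR5=true
      shift
      ;;
    *)
      echo "Unknown option: $1" >&2; exit 1
      ;;
  esac
done

echo "Would download pretrained models from ${BASE_URL:-<no source given>} (UVR5: ${DOWNLOAD_UVR5})"
```
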
@@ -146,13 +144,13 @@ docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-Docker

## Pretrained Models

-**If `install.sh` runs successfully, you may skip No.1.**
+**If `install.sh` runs successfully, you may skip No.1,2,3**

**Users in China can [download all these models here](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e/dkxgpiy9zb96hob4#nVNhX).**

1. Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models`.

-2. Download G2PW models from [G2PWModel_1.1.zip](https://paddlespeech.cdn.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.1.zip), unzip and rename to `G2PWModel`, and then place them in `GPT_SoVITS/text`.(Chinese TTS Only)
+2. Download G2PW models from [G2PWModel.zip(HF)](https://huggingface.co/XXXXRT/GPT-SoVITS-Pretrained/resolve/main/G2PWModel.zip)| [G2PWModel.zip(ModelScope)](https://www.modelscope.cn/models/XXXXRT/GPT-SoVITS-Pretrained/resolve/master/G2PWModel.zip), unzip and rename to `G2PWModel`, and then place them in `GPT_SoVITS/text`.(Chinese TTS Only)

3. For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`.
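
For readers who skip `install.sh` (or whose run fails), step 2 of the new list can be done by hand. A minimal sketch using the HF URL listed above; the name of the extracted folder is an assumption, so rename it to `G2PWModel` if it differs:

```bash
# Manual alternative to install.sh's automatic G2PW download (Chinese TTS only).
# Run from the repository root.
wget https://huggingface.co/XXXXRT/GPT-SoVITS-Pretrained/resolve/main/G2PWModel.zip
unzip G2PWModel.zip
# If the extracted directory is not already named G2PWModel, rename it first,
# then place it under GPT_SoVITS/text as the README instructs.
mv G2PWModel GPT_SoVITS/text/
```
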
@@ -262,7 +260,7 @@ Use v2 from v1 environment:

3. Download v2 pretrained models from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main/gsv-v2final-pretrained) and put them into `GPT_SoVITS\pretrained_models\gsv-v2final-pretrained`.

-Chinese v2 additional: [G2PWModel_1.1.zip](https://paddlespeech.cdn.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.1.zip)(Download G2PW models, unzip and rename to `G2PWModel`, and then place them in `GPT_SoVITS/text`.)
+Chinese v2 additional: [G2PWModel.zip(HF)](https://huggingface.co/XXXXRT/GPT-SoVITS-Pretrained/resolve/main/G2PWModel.zip)| [G2PWModel.zip(ModelScope)](https://www.modelscope.cn/models/XXXXRT/GPT-SoVITS-Pretrained/resolve/master/G2PWModel.zip)(Download G2PW models, unzip and rename to `G2PWModel`, and then place them in `GPT_SoVITS/text`.)

## V3 Release Notes
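
The v2 hunk points at a folder inside the `lj1995/GPT-SoVITS` Hugging Face repo. If those files are fetched manually rather than via `install.sh`, one possible route (an assumption, not something this PR documents) is the Hugging Face CLI from a recent `huggingface_hub`:

```bash
# Sketch: pull only the v2 pretrained folder from lj1995/GPT-SoVITS into
# GPT_SoVITS/pretrained_models (requires: pip install -U huggingface_hub).
huggingface-cli download lj1995/GPT-SoVITS \
  --include "gsv-v2final-pretrained/*" \
  --local-dir GPT_SoVITS/pretrained_models
```
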