optimize the structure
A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.

## Features:

1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion.
2. **Few-shot TTS:** Fine-tune the model with just 1 minute of training data for improved voice similarity and realism.
3. **Cross-lingual Support:** Inference in languages different from the training dataset, currently supporting English, Japanese, and Chinese.
4. **WebUI Tools:** Integrated tools include voice accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.

**Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!**

Unseen speakers few-shot fine-tuning demo:

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

## Installation

Users in the China region can [click here](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official) to use AutoDL Cloud Docker and experience the full functionality online.
### Tested Environments

- Python 3.9, PyTorch 2.0.1, CUDA 11
- Python 3.10.13, PyTorch 2.1.2, CUDA 12.3
- Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon)

_Note: numba==0.56.4 requires py<3.11_

### Windows

If you are a Windows user (tested with win>=10), you can directly download the [pre-packaged distribution](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true), unzip it, and double-click _go-webui.bat_ to start GPT-SoVITS-WebUI.

### Linux

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh
```
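
Once setup finishes, the WebUI is started from the repository root. A minimal sketch, assuming `webui.py` is the entry point (the same script the Windows _go-webui.bat_ launcher is named after):

```bash
# Assumption: webui.py in the GPT-SoVITS root is the WebUI entry point.
conda activate GPTSoVits
python webui.py
```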

### macOS

Only Macs that meet the following conditions can train models:

- Mac computers with Apple silicon
- macOS 12.3 or later
- Xcode command-line tools installed by running `xcode-select --install`

**All Macs can do inference with CPU, which has been demonstrated to outperform GPU inference.**

First make sure you have installed FFmpeg by running `brew install ffmpeg` or `conda install ffmpeg`, then install by using the following commands:

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits

pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
pip install -r requirements.txt
```

_Note: Training models will only work if you've installed PyTorch Nightly._
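
As an optional sanity check (not part of the official guide), you can confirm that the nightly build actually exposes Apple's MPS backend:

```bash
# Prints the installed torch version and whether the Metal (MPS) backend is available.
python -c "import torch; print(torch.__version__, torch.backends.mps.is_available())"
```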

### Install Manually

#### Install Dependencies

```bash
pip install -r requirements.txt
```

#### Install FFmpeg

##### Conda Users

```bash
conda install ffmpeg
```

##### Ubuntu/Debian Users

```bash
sudo apt install ffmpeg
sudo apt install libsox-dev
conda install -c conda-forge 'ffmpeg<7'
```

##### MacOS Users

```bash
brew install ffmpeg
```

##### Windows Users

Download and place [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) in the GPT-SoVITS root.
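
Whichever route you take, a quick optional check that FFmpeg is visible from your shell:

```bash
# Each tool should print a version banner if it is installed
# (on Windows, run this from the GPT-SoVITS root where the .exe files sit).
ffmpeg -version
ffprobe -version
```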
### Using Docker

#### docker-compose.yaml configuration
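
Edit `docker-compose.yaml` to match your volumes, ports, and GPU setup, then bring the service up. A minimal sketch, assuming the compose file has already been adapted:

```bash
docker compose -f "docker-compose.yaml" up -d
```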

As above, modify the corresponding parameters based on your actual situation, then run the following command:

```
docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx
```
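
In the command above, `--env=is_half=False` forces full-precision inference, and `xxxxx` stands in for the image tag you actually pulled.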

## Pretrained Models

Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models`.

For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`.

Users in the China region can download these two models by entering the links below and clicking "Download a copy":

- [GPT-SoVITS Models](https://www.icloud.com.cn/iclouddrive/056y_Xog_HXpALuVUjscIwTtg#GPT-SoVITS_Models)
- [UVR5 Weights](https://www.icloud.com.cn/iclouddrive/0bekRKDiJXboFhbfm3lM2fVbA#UVR5_Weights)

For Chinese ASR (additionally), download models from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and place them in `tools/damo_asr/models`.
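
If you prefer the command line to the browser, here is a sketch using the Hugging Face CLI (assumes `huggingface_hub` is installed; the repo ID comes from the link above):

```bash
# Sketch: pull the pretrained models straight into the expected folder.
pip install -U huggingface_hub
huggingface-cli download lj1995/GPT-SoVITS --local-dir GPT_SoVITS/pretrained_models
```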

## Dataset Format

The TTS annotation .list file format:
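
Each line joins a vocal path, speaker name, language code, and transcript with `|` separators:

```
vocal_path|speaker_name|language|text
```

An illustrative entry (the path, speaker, and text below are made up):

```
D:\GPT-SoVITS\wavs\speaker1_0001.wav|speaker1|en|I like playing Genshin.
```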

Annotation lists can also be generated from the command line with the bundled ASR tool:

```bash
python ./tools/damo_asr/WhisperASR.py -i <input> -o <output> -f <file_name.list>
```

A custom list save path is enabled.
## Credits

Special thanks to the following projects and contributors:

- [ar-vits](https://github.com/innnky/ar-vits)