chore: add an LRU cache to api v3 to speed up inference when switching model weights (#1058)

* chore: add an LRU cache to api v3 to speed up inference when switching model weights

* chore: start api v3 from the Dockerfile

* chore: change api default bind address from 127.0.0.1 to 0.0.0.0

* chore: make the GPU happy when doing TTS

* chore: rollback Dockerfile

* chore: fix

* chore: fix

---------

Co-authored-by: kevin.zhang <kevin.zhang@cardinfolink.com>
Author: Kevin Zhang
Date: 2024-05-19 17:15:56 +08:00 (committed by GitHub)
Parent: 2cafde159c
Commit: 50c3664496
2 changed files with 473 additions and 0 deletions


@@ -182,6 +182,12 @@ class TTS_Config:
     def __repr__(self):
         return self.__str__()
 
+    def __hash__(self):
+        return hash(self.configs_path)
+
+    def __eq__(self, other):
+        return isinstance(other, TTS_Config) and self.configs_path == other.configs_path
+
 
 class TTS:
     def __init__(self, configs: Union[dict, str, TTS_Config]):
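
The added __hash__/__eq__ pair makes TTS_Config usable as a cache key (equality is based on configs_path), which is what lets the api keep one loaded TTS pipeline per config and skip reloading model weights when a request switches models. Below is a minimal sketch of that idea, assuming a hypothetical get_tts_pipeline helper, an illustrative cache size, and the TTS_infer_pack.TTS import path; none of this is taken verbatim from the commit.

# Sketch only: cache TTS pipelines keyed by the now-hashable TTS_Config.
# get_tts_pipeline, maxsize=3, and the import path are assumptions for
# illustration; they are not the code added by this commit.
from functools import lru_cache

from TTS_infer_pack.TTS import TTS, TTS_Config


@lru_cache(maxsize=3)  # keep a few recently used pipelines in memory
def get_tts_pipeline(config: TTS_Config) -> TTS:
    # Equal configs_path values hash to the same key, so repeated requests
    # for the same config reuse the cached TTS instance instead of
    # reloading model weights.
    return TTS(config)

With a cache like this, switching back to a recently used set of weights is a cache hit rather than a full reload, which is the inference-speed improvement the commit title refers to.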