chore: add LRU cache support for api v3 to improve inference speed when exchanging model weights (#1058)

* chore: add LRU cache support for api v3 to improve inference speed when exchanging model weights
* chore: Dockerfile starts api v3
* chore: change the api default bind address from 127.0.0.1 to 0.0.0.0
* chore: keep the GPU happy when doing TTS
* chore: roll back Dockerfile
* chore: fix
* chore: fix

---------

Co-authored-by: kevin.zhang <kevin.zhang@cardinfolink.com>
```diff
@@ -182,6 +182,12 @@ class TTS_Config:
     def __repr__(self):
         return self.__str__()
 
+    def __hash__(self):
+        return hash(self.configs_path)
+
+    def __eq__(self, other):
+        return isinstance(other, TTS_Config) and self.configs_path == other.configs_path
+
 
 class TTS:
     def __init__(self, configs: Union[dict, str, TTS_Config]):
```
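The hunk above makes `TTS_Config` hashable (and comparable by `configs_path`), which is what lets instances serve as keys for `functools.lru_cache` so repeated requests with the same config reuse already-loaded weights instead of reloading them. A minimal sketch of that mechanism, where `Config` and `load_model` are hypothetical stand-ins for illustration, not names from the repo:

```python
from functools import lru_cache


class Config:
    """Stand-in for TTS_Config: hashable and comparable by its configs_path."""

    def __init__(self, configs_path):
        self.configs_path = configs_path

    def __hash__(self):
        return hash(self.configs_path)

    def __eq__(self, other):
        return isinstance(other, Config) and self.configs_path == other.configs_path


load_calls = []  # records how many times the "expensive" load actually runs


@lru_cache(maxsize=4)
def load_model(config):
    # In the real code this would load model weights from disk/GPU;
    # here we just record the call so the caching behavior is visible.
    load_calls.append(config.configs_path)
    return f"model<{config.configs_path}>"


# Two distinct Config objects with equal configs_path hash and compare equal,
# so the second call is a cache hit and returns the same cached object.
a = load_model(Config("configs/a.yaml"))
b = load_model(Config("configs/a.yaml"))
assert a is b
assert len(load_calls) == 1
```

Without `__hash__` and `__eq__`, each `Config` instance would hash by identity (or, if `__eq__` alone were defined, not be hashable at all), so every request would miss the cache and reload the weights.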