- Added support for running the CLI and the Ollama server via Docker
- Introduced tests for the local embeddings model and the standalone Docker setup
- Enabled conditional Ollama server launch via `LLM_PROVIDER` (see the sketch below)
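As a rough illustration of the conditional launch, the container entrypoint might gate the server start on the provider value. This is a minimal sketch, not the PR's actual script: the `ollama` provider value, the port-11434 health check, and the `exec "$@"` hand-off are all assumptions.

```sh
#!/bin/sh
# Sketch of a Docker entrypoint that starts Ollama only when selected.
# Assumption: LLM_PROVIDER=ollama selects the bundled server; any other
# value (e.g. a hosted API) skips the local launch entirely.
if [ "$LLM_PROVIDER" = "ollama" ]; then
    ollama serve &                       # run the server in the background

    # Wait until the API answers before handing control to the CLI.
    # /api/tags is a lightweight Ollama endpoint; 11434 is its default port.
    until curl -sf http://localhost:11434/api/tags > /dev/null; do
        sleep 1
    done
fi

exec "$@"    # run the container's actual command (the CLI)
```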
Git attributes file (2 lines, 47 B, Plaintext):
```
init-ollama.sh text eol=lf
build.sh text eol=lf
```
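The `eol=lf` rules matter because these scripts run inside Linux containers, where CRLF line endings break the shebang line. To confirm the attributes apply as intended, one can query them with `git check-attr` (standard Git; the paths are the two scripts from this PR):

```sh
$ git check-attr text eol -- init-ollama.sh build.sh
init-ollama.sh: text: set
init-ollama.sh: eol: lf
build.sh: text: set
build.sh: eol: lf
```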