Docker support and Ollama support (#47)

- Added support for running CLI and Ollama server via Docker
- Introduced tests for local embeddings model and standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER (see the sketch below)
Geeta Chauhan
2025-06-25 20:57:05 -07:00
committed by GitHub
parent 7abff0f354
commit 78ea029a0b
23 changed files with 2141 additions and 19 deletions
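
The conditional Ollama launch mentioned above could work roughly like the following entrypoint sketch. This is a hypothetical illustration, not the exact code in this commit; the script itself, the `sleep` wait, and the model pulls are assumptions.

#!/bin/sh
# Start the Ollama server only when LLM_PROVIDER is set to "ollama",
# then hand control to the requested command (e.g. the CLI).
if [ "$LLM_PROVIDER" = "ollama" ]; then
    ollama serve &                        # run the server in the background
    sleep 5                               # crude wait for the server to come up
    ollama pull "$LLM_DEEP_THINK_MODEL"   # fetch the configured models
    ollama pull "$LLM_QUICK_THINK_MODEL"
    ollama pull "$LLM_EMBEDDING_MODEL"
fi
exec "$@"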

.env.example

@@ -0,0 +1,36 @@
# This is an example .env file for the Trading Agent project.
# Copy this file to .env and fill in your API keys and environment configurations.
# "NOTE: When using for `docker` command do not use quotes around the values, otherwise environment variables will not be set."
# API Keys
# Set your OpenAI API key; used for OpenAI, Ollama, or other OpenAI-compatible models
OPENAI_API_KEY=<your-openai-key>
# Set your Finnhub API key
FINNHUB_API_KEY=<your_finnhub_api_key_here>
# LLM configuration for OpenAI
# Set LLM_PROVIDER to one of: openai, anthropic, google, openrouter, or ollama
LLM_PROVIDER=openai
# Set the API URL for the LLM backend
LLM_BACKEND_URL=https://api.openai.com/v1
# Uncomment the following for a local Ollama configuration
#LLM_PROVIDER=ollama
## For Ollama running in the same container; /v1 is appended for OpenAI compatibility
#LLM_BACKEND_URL=http://localhost:11434/v1
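## To verify the server is reachable, you can query the OpenAI-compatible
## endpoint, e.g.: curl http://localhost:11434/v1/models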
# Set the name of the deep-think model
LLM_DEEP_THINK_MODEL=llama3.2
# Set the name of the quick-think model
LLM_QUICK_THINK_MODEL=qwen3
# Set the name of the embedding model
LLM_EMBEDDING_MODEL=nomic-embed-text
# Agent Configuration
# Maximum number of debate rounds for the agents to engage in; choose 1, 3, or 5
MAX_DEBATE_ROUNDS=1
# Set to False if you want to disable tools that access the internet
ONLINE_TOOLS=True
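
A minimal usage sketch, assuming the image has been built locally; the `tradingagents` tag is a placeholder for illustration, not necessarily the project's actual image name:

cp .env.example .env                                 # copy the template, then fill in real keys (unquoted)
docker run --rm -it --env-file .env tradingagents    # run the CLI with the environment file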