27 Commits
v0.1.1 ... main

Author SHA1 Message Date
Edward Sun
fda4f664e8 Merge pull request #49 from Zhongyi-Lu/a
Exclude `.env` from Git.
2025-07-01 09:17:46 -07:00
Yijia Xiao
718df34932 Merge pull request #29 from ZeroAct/save_results
Save results
2025-06-26 00:28:30 -04:00
Max Wong
43aa9c5d09 Local Ollama (#53)
- Fix typo: 'Start' -> 'End'
- Add llama3.1 selection
- Use 'quick_think_llm' model instead of hard-coding GPT
2025-06-26 00:27:01 -04:00
Yijia Xiao
26c5ba5a78 Revert "Docker support and Ollama support (#47)" (#57)
This reverts commit 78ea029a0b.
2025-06-26 00:07:58 -04:00
Geeta Chauhan
78ea029a0b Docker support and Ollama support (#47)
- Added support for running CLI and Ollama server via Docker
- Introduced tests for local embeddings model and standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER
2025-06-25 23:57:05 -04:00
Huijae Lee
ee3d499894 Merge branch 'TauricResearch:main' into save_results 2025-06-25 08:43:19 +09:00
Yijia Xiao
7abff0f354 Merge pull request #46 from AtharvSabde/patch-2
Updated requirements.txt based on latest commit
2025-06-23 20:40:58 -04:00
Yijia Xiao
b575bd0941 Merge pull request #52 from TauricResearch/dev
Merge dev into main. Add support for Anthropic and OpenRouter.
2025-06-23 20:38:14 -04:00
Zhongyi Lu
b8f712b170 Exclude .env from Git 2025-06-21 23:29:26 -07:00
Edward Sun
52284ce13c Fixed Anthropic support. Anthropic returns a different response format when it includes tool calls; explicit handling added 2025-06-21 12:51:34 -07:00
Atharv Sabde
11804f88ff Updated requirements.txt based on latest commit
PULL REQUEST: Add support for other backends, such as OpenRouter and Ollama

It had two requirements missing; added those.
2025-06-20 15:58:22 +05:30
Yijia Xiao
1e86e74314 Merge pull request #40 from RealMyth21/main
Updated README.md: Swap Trader and Management order.
2025-06-19 15:10:36 -04:00
Yijia Xiao
c2f897fc67 Merge pull request #43 from AtharvSabde/patch-1
fundamentals_analyst.py (spelling mistake in instruction: Makrdown -> Markdown)
2025-06-19 15:05:08 -04:00
Yijia Xiao
ed32081f57 Merge pull request #44 from TauricResearch/dev
Merge dev into main branch
2025-06-19 15:00:07 -04:00
Atharv Sabde
2af7ef3d79 fundamentals_analyst.py (spelling mistake: Makrdown -> Markdown) 2025-06-19 21:48:16 +05:30
Mithil Srungarapu
383deb72aa Updated README.md
The diagrams were switched, so I fixed it.
2025-06-18 19:08:10 -07:00
Edward Sun
7eaf4d995f Update clear-message step because Anthropic needs at least one message in a chat call 2025-06-15 23:14:47 -07:00
Edward Sun
da84ef43aa main works, cli bugs 2025-06-15 22:20:59 -07:00
Edward Sun
90b23e72f5 Merge pull request #25 from maxer137/main
Add support for other backends, such as OpenRouter and Ollama
2025-06-15 16:06:20 -07:00
ZeroAct
417b09712c refactor 2025-06-12 13:53:28 +09:00
saksham0161
570644d939 Fix ticker hardcoding in prompt (#28) 2025-06-11 19:43:39 -07:00
ZeroAct
9647359246 save reports & logs under results_dir 2025-06-12 11:25:07 +09:00
maxer137
99789f9cd1 Add support for other backends, such as OpenRouter and Ollama
This aims to offer alternative OpenAI-compatible APIs, letting people experiment with running the application locally.
2025-06-11 14:19:25 +02:00
neo
a879868396 docs: add links to other language versions of README (#13)
Added language selection links to the README for easier access to translated versions: German, Spanish, French, Japanese, Korean, Portuguese, Russian, and Chinese.
2025-06-09 15:51:06 -07:00
Yijia-Xiao
0013415378 Add star history 2025-06-09 15:14:41 -07:00
Edward Sun
0fdfd35867 Fix default python usage config code 2025-06-08 13:16:10 -07:00
Edward Sun
e994e56c23 Remove EODHD from readme 2025-06-07 15:04:43 -07:00
18 changed files with 5762 additions and 68 deletions

.gitignore

@@ -6,3 +6,4 @@ src/
 eval_results/
 eval_data/
 *.egg-info/
+.env

.python-version (new file)

@@ -0,0 +1 @@
3.10


@@ -11,6 +11,18 @@
 <a href="https://github.com/TauricResearch/" target="_blank"><img alt="Community" src="https://img.shields.io/badge/Join_GitHub_Community-TauricResearch-14C290?logo=discourse"/></a>
 </div>
+<div align="center">
+<!-- Keep these links. Translations will automatically update with the README. -->
+<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=de">Deutsch</a> |
+<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=es">Español</a> |
+<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=fr">français</a> |
+<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ja">日本語</a> |
+<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ko">한국어</a> |
+<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=pt">Português</a> |
+<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ru">Русский</a> |
+<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=zh">中文</a>
+</div>
 ---
 # TradingAgents: Multi-Agents LLM Financial Trading Framework
@@ -19,6 +31,16 @@
 >
 > So we decided to fully open-source the framework. Looking forward to building impactful projects with you!
+<div align="center">
+<a href="https://www.star-history.com/#TauricResearch/TradingAgents&Date">
+<picture>
+<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=TauricResearch/TradingAgents&type=Date&theme=dark" />
+<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=TauricResearch/TradingAgents&type=Date" />
+<img alt="TradingAgents Star History" src="https://api.star-history.com/svg?repos=TauricResearch/TradingAgents&type=Date" style="width: 80%; height: auto;" />
+</picture>
+</a>
+</div>
 <div align="center">
 🚀 [TradingAgents](#tradingagents-framework) | ⚡ [Installation & CLI](#installation-and-cli) | 🎬 [Demo](https://www.youtube.com/watch?v=90gr5lwjIho) | 📦 [Package Usage](#tradingagents-package) | 🤝 [Contributing](#contributing) | 📄 [Citation](#citation)
@@ -58,7 +80,7 @@ Our framework decomposes complex trading tasks into specialized roles. This ensu
 - Composes reports from the analysts and researchers to make informed trading decisions. It determines the timing and magnitude of trades based on comprehensive market insights.
 <p align="center">
-<img src="assets/risk.png" width="70%" style="display: inline-block; margin: 0 2%;">
+<img src="assets/trader.png" width="70%" style="display: inline-block; margin: 0 2%;">
 </p>
 ### Risk Management and Portfolio Manager
@@ -66,7 +88,7 @@ Our framework decomposes complex trading tasks into specialized roles. This ensu
 - The Portfolio Manager approves/rejects the transaction proposal. If approved, the order will be sent to the simulated exchange and executed.
 <p align="center">
-<img src="assets/trader.png" width="70%" style="display: inline-block; margin: 0 2%;">
+<img src="assets/risk.png" width="70%" style="display: inline-block; margin: 0 2%;">
 </p>
 ## Installation and CLI
@@ -92,7 +114,7 @@ pip install -r requirements.txt
 ### Required APIs
-You will also need the FinnHub API and EODHD API for financial data. All of our code is implemented with the free tier.
+You will also need the FinnHub API for financial data. All of our code is implemented with the free tier.
 ```bash
 export FINNHUB_API_KEY=$YOUR_FINNHUB_API_KEY
 ```
@@ -136,8 +158,9 @@ To use TradingAgents inside your code, you can import the `tradingagents` module
 ```python
 from tradingagents.graph.trading_graph import TradingAgentsGraph
+from tradingagents.default_config import DEFAULT_CONFIG
-ta = TradingAgentsGraph(debug=True, config=config)
+ta = TradingAgentsGraph(debug=True, config=DEFAULT_CONFIG.copy())
 # forward propagate
 _, decision = ta.propagate("NVDA", "2024-05-10")
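A note on the `DEFAULT_CONFIG.copy()` change above: copying before overriding keeps the shared default dict untouched for every later consumer. A minimal sketch — the keys here are illustrative stand-ins, not the package's full config:

```python
# Stand-in default config with illustrative keys; the real DEFAULT_CONFIG
# lives in tradingagents.default_config.
DEFAULT_CONFIG = {"deep_think_llm": "gpt-4.1-nano", "max_debate_rounds": 1}

config = DEFAULT_CONFIG.copy()   # shallow copy: safe for a flat config dict
config["max_debate_rounds"] = 3  # override applies to the copy only
```

Without the copy, the override would mutate the module-level defaults and leak into every later graph built from them.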


@@ -1,6 +1,8 @@
 from typing import Optional
 import datetime
 import typer
+from pathlib import Path
+from functools import wraps
 from rich.console import Console
 from rich.panel import Panel
 from rich.spinner import Spinner
@@ -295,10 +297,27 @@ def update_display(layout, spinner_text=None):
     # Add regular messages
     for timestamp, msg_type, content in message_buffer.messages:
+        # Convert content to string if it's not already
+        content_str = content
+        if isinstance(content, list):
+            # Handle list of content blocks (Anthropic format)
+            text_parts = []
+            for item in content:
+                if isinstance(item, dict):
+                    if item.get('type') == 'text':
+                        text_parts.append(item.get('text', ''))
+                    elif item.get('type') == 'tool_use':
+                        text_parts.append(f"[Tool: {item.get('name', 'unknown')}]")
+                else:
+                    text_parts.append(str(item))
+            content_str = ' '.join(text_parts)
+        elif not isinstance(content_str, str):
+            content_str = str(content)
         # Truncate message content if too long
-        if isinstance(content, str) and len(content) > 200:
-            content = content[:197] + "..."
-        all_messages.append((timestamp, msg_type, content))
+        if len(content_str) > 200:
+            content_str = content_str[:197] + "..."
+        all_messages.append((timestamp, msg_type, content_str))
     # Sort by timestamp
     all_messages.sort(key=lambda x: x[0])
@@ -444,20 +463,30 @@ def get_user_selections():
     )
     selected_research_depth = select_research_depth()
-    # Step 5: Thinking agents
+    # Step 5: OpenAI backend
     console.print(
         create_question_box(
-            "Step 5: Thinking Agents", "Select your thinking agents for analysis"
+            "Step 5: OpenAI backend", "Select which service to talk to"
         )
     )
-    selected_shallow_thinker = select_shallow_thinking_agent()
-    selected_deep_thinker = select_deep_thinking_agent()
+    selected_llm_provider, backend_url = select_llm_provider()
+    # Step 6: Thinking agents
+    console.print(
+        create_question_box(
+            "Step 6: Thinking Agents", "Select your thinking agents for analysis"
+        )
+    )
+    selected_shallow_thinker = select_shallow_thinking_agent(selected_llm_provider)
+    selected_deep_thinker = select_deep_thinking_agent(selected_llm_provider)
     return {
         "ticker": selected_ticker,
         "analysis_date": analysis_date,
         "analysts": selected_analysts,
         "research_depth": selected_research_depth,
+        "llm_provider": selected_llm_provider.lower(),
+        "backend_url": backend_url,
         "shallow_thinker": selected_shallow_thinker,
         "deep_thinker": selected_deep_thinker,
     }
@@ -683,6 +712,24 @@ def update_research_team_status(status):
     for agent in research_team:
         message_buffer.update_agent_status(agent, status)
+def extract_content_string(content):
+    """Extract string content from various message formats."""
+    if isinstance(content, str):
+        return content
+    elif isinstance(content, list):
+        # Handle Anthropic's list format
+        text_parts = []
+        for item in content:
+            if isinstance(item, dict):
+                if item.get('type') == 'text':
+                    text_parts.append(item.get('text', ''))
+                elif item.get('type') == 'tool_use':
+                    text_parts.append(f"[Tool: {item.get('name', 'unknown')}]")
+            else:
+                text_parts.append(str(item))
+        return ' '.join(text_parts)
+    else:
+        return str(content)
 def run_analysis():
     # First get all user selections
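The `extract_content_string` helper added in this hunk can be exercised on its own; below it is reproduced from the diff with a small usage sketch (the sample blocks mimic Anthropic's typed content-block format):

```python
def extract_content_string(content):
    """Extract string content from various message formats."""
    if isinstance(content, str):
        return content
    elif isinstance(content, list):
        # Handle Anthropic's list of content blocks
        text_parts = []
        for item in content:
            if isinstance(item, dict):
                if item.get('type') == 'text':
                    text_parts.append(item.get('text', ''))
                elif item.get('type') == 'tool_use':
                    text_parts.append(f"[Tool: {item.get('name', 'unknown')}]")
            else:
                text_parts.append(str(item))
        return ' '.join(text_parts)
    else:
        return str(content)

# Anthropic-style tool-call response: a list of typed content blocks
blocks = [
    {"type": "text", "text": "Checking fundamentals."},
    {"type": "tool_use", "name": "get_YFin_data", "input": {}},
]
print(extract_content_string(blocks))  # prints: Checking fundamentals. [Tool: get_YFin_data]
```

This is the shape difference the "fixed anthropic support" commit refers to: OpenAI returns a plain string, while Anthropic interleaves text and `tool_use` blocks in a list.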
@@ -694,12 +741,61 @@ def run_analysis():
     config["max_risk_discuss_rounds"] = selections["research_depth"]
     config["quick_think_llm"] = selections["shallow_thinker"]
     config["deep_think_llm"] = selections["deep_thinker"]
+    config["backend_url"] = selections["backend_url"]
+    config["llm_provider"] = selections["llm_provider"].lower()
     # Initialize the graph
     graph = TradingAgentsGraph(
         [analyst.value for analyst in selections["analysts"]], config=config, debug=True
     )
+    # Create result directory
+    results_dir = Path(config["results_dir"]) / selections["ticker"] / selections["analysis_date"]
+    results_dir.mkdir(parents=True, exist_ok=True)
+    report_dir = results_dir / "reports"
+    report_dir.mkdir(parents=True, exist_ok=True)
+    log_file = results_dir / "message_tool.log"
+    log_file.touch(exist_ok=True)
+    def save_message_decorator(obj, func_name):
+        func = getattr(obj, func_name)
+        @wraps(func)
+        def wrapper(*args, **kwargs):
+            func(*args, **kwargs)
+            timestamp, message_type, content = obj.messages[-1]
+            content = content.replace("\n", " ")  # Replace newlines with spaces
+            with open(log_file, "a") as f:
+                f.write(f"{timestamp} [{message_type}] {content}\n")
+        return wrapper
+    def save_tool_call_decorator(obj, func_name):
+        func = getattr(obj, func_name)
+        @wraps(func)
+        def wrapper(*args, **kwargs):
+            func(*args, **kwargs)
+            timestamp, tool_name, args = obj.tool_calls[-1]
+            args_str = ", ".join(f"{k}={v}" for k, v in args.items())
+            with open(log_file, "a") as f:
+                f.write(f"{timestamp} [Tool Call] {tool_name}({args_str})\n")
+        return wrapper
+    def save_report_section_decorator(obj, func_name):
+        func = getattr(obj, func_name)
+        @wraps(func)
+        def wrapper(section_name, content):
+            func(section_name, content)
+            if section_name in obj.report_sections and obj.report_sections[section_name] is not None:
+                content = obj.report_sections[section_name]
+                if content:
+                    file_name = f"{section_name}.md"
+                    with open(report_dir / file_name, "w") as f:
+                        f.write(content)
+        return wrapper
+    message_buffer.add_message = save_message_decorator(message_buffer, "add_message")
+    message_buffer.add_tool_call = save_tool_call_decorator(message_buffer, "add_tool_call")
+    message_buffer.update_report_section = save_report_section_decorator(message_buffer, "update_report_section")
     # Now start the display layout
     layout = create_layout()
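The save-results hunk above monkey-patches the buffer's methods with logging wrappers. The pattern can be illustrated in isolation — a runnable sketch where `MessageBuffer` is a minimal, hypothetical stand-in for the CLI's real buffer:

```python
import tempfile
from functools import wraps
from pathlib import Path

class MessageBuffer:
    # Minimal stand-in for the CLI's message_buffer; only .messages
    # and .add_message are modeled here.
    def __init__(self):
        self.messages = []
    def add_message(self, msg_type, content):
        self.messages.append(("12:00:00", msg_type, content))

log_file = Path(tempfile.mkdtemp()) / "message_tool.log"
log_file.touch()

def save_message_decorator(obj, func_name):
    # Same shape as the diff: call the original bound method, then
    # persist the most recent entry to the log file.
    func = getattr(obj, func_name)
    @wraps(func)
    def wrapper(*args, **kwargs):
        func(*args, **kwargs)
        timestamp, message_type, content = obj.messages[-1]
        content = content.replace("\n", " ")  # keep the log one line per entry
        with open(log_file, "a") as f:
            f.write(f"{timestamp} [{message_type}] {content}\n")
    return wrapper

buf = MessageBuffer()
buf.add_message = save_message_decorator(buf, "add_message")
buf.add_message("Reasoning", "line one\nline two")
print(log_file.read_text())  # prints: 12:00:00 [Reasoning] line one line two
```

Because `getattr` captures the bound method before the instance attribute shadows it, the wrapper calls through to the original behavior and only adds the side effect.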
@@ -754,7 +850,7 @@ def run_analysis():
             # Extract message content and type
             if hasattr(last_message, "content"):
-                content = last_message.content
+                content = extract_content_string(last_message.content)  # Use the helper function
                 msg_type = "Reasoning"
             else:
                 content = str(last_message)


@@ -122,22 +122,44 @@ def select_research_depth() -> int:
     return choice
-def select_shallow_thinking_agent() -> str:
+def select_shallow_thinking_agent(provider) -> str:
     """Select shallow thinking llm engine using an interactive selection."""
     # Define shallow thinking llm engine options with their corresponding model names
-    SHALLOW_AGENT_OPTIONS = [
+    SHALLOW_AGENT_OPTIONS = {
+        "openai": [
         ("GPT-4o-mini - Fast and efficient for quick tasks", "gpt-4o-mini"),
         ("GPT-4.1-nano - Ultra-lightweight model for basic operations", "gpt-4.1-nano"),
         ("GPT-4.1-mini - Compact model with good performance", "gpt-4.1-mini"),
         ("GPT-4o - Standard model with solid capabilities", "gpt-4o"),
+        ],
+        "anthropic": [
+            ("Claude Haiku 3.5 - Fast inference and standard capabilities", "claude-3-5-haiku-latest"),
+            ("Claude Sonnet 3.5 - Highly capable standard model", "claude-3-5-sonnet-latest"),
+            ("Claude Sonnet 3.7 - Exceptional hybrid reasoning and agentic capabilities", "claude-3-7-sonnet-latest"),
+            ("Claude Sonnet 4 - High performance and excellent reasoning", "claude-sonnet-4-0"),
+        ],
+        "google": [
+            ("Gemini 2.0 Flash-Lite - Cost efficiency and low latency", "gemini-2.0-flash-lite"),
+            ("Gemini 2.0 Flash - Next generation features, speed, and thinking", "gemini-2.0-flash"),
+            ("Gemini 2.5 Flash - Adaptive thinking, cost efficiency", "gemini-2.5-flash-preview-05-20"),
+        ],
+        "openrouter": [
+            ("Meta: Llama 4 Scout", "meta-llama/llama-4-scout:free"),
+            ("Meta: Llama 3.3 8B Instruct - A lightweight and ultra-fast variant of Llama 3.3 70B", "meta-llama/llama-3.3-8b-instruct:free"),
+            ("google/gemini-2.0-flash-exp:free - Gemini Flash 2.0 offers a significantly faster time to first token", "google/gemini-2.0-flash-exp:free"),
+        ],
+        "ollama": [
+            ("llama3.1 local", "llama3.1"),
+            ("llama3.2 local", "llama3.2"),
         ]
+    }
     choice = questionary.select(
         "Select Your [Quick-Thinking LLM Engine]:",
         choices=[
             questionary.Choice(display, value=value)
-            for display, value in SHALLOW_AGENT_OPTIONS
+            for display, value in SHALLOW_AGENT_OPTIONS[provider.lower()]
         ],
         instruction="\n- Use arrow keys to navigate\n- Press Enter to select",
         style=questionary.Style(
@@ -158,11 +180,12 @@ def select_shallow_thinking_agent() -> str:
     return choice
-def select_deep_thinking_agent() -> str:
+def select_deep_thinking_agent(provider) -> str:
     """Select deep thinking llm engine using an interactive selection."""
     # Define deep thinking llm engine options with their corresponding model names
-    DEEP_AGENT_OPTIONS = [
+    DEEP_AGENT_OPTIONS = {
+        "openai": [
         ("GPT-4.1-nano - Ultra-lightweight model for basic operations", "gpt-4.1-nano"),
         ("GPT-4.1-mini - Compact model with good performance", "gpt-4.1-mini"),
         ("GPT-4o - Standard model with solid capabilities", "gpt-4o"),
@@ -170,13 +193,35 @@ def select_deep_thinking_agent() -> str:
         ("o3-mini - Advanced reasoning model (lightweight)", "o3-mini"),
         ("o3 - Full advanced reasoning model", "o3"),
         ("o1 - Premier reasoning and problem-solving model", "o1"),
+        ],
+        "anthropic": [
+            ("Claude Haiku 3.5 - Fast inference and standard capabilities", "claude-3-5-haiku-latest"),
+            ("Claude Sonnet 3.5 - Highly capable standard model", "claude-3-5-sonnet-latest"),
+            ("Claude Sonnet 3.7 - Exceptional hybrid reasoning and agentic capabilities", "claude-3-7-sonnet-latest"),
+            ("Claude Sonnet 4 - High performance and excellent reasoning", "claude-sonnet-4-0"),
+            ("Claude Opus 4 - Most powerful Anthropic model", "claude-opus-4-0"),
+        ],
+        "google": [
+            ("Gemini 2.0 Flash-Lite - Cost efficiency and low latency", "gemini-2.0-flash-lite"),
+            ("Gemini 2.0 Flash - Next generation features, speed, and thinking", "gemini-2.0-flash"),
+            ("Gemini 2.5 Flash - Adaptive thinking, cost efficiency", "gemini-2.5-flash-preview-05-20"),
+            ("Gemini 2.5 Pro", "gemini-2.5-pro-preview-06-05"),
+        ],
+        "openrouter": [
+            ("DeepSeek V3 - a 685B-parameter, mixture-of-experts model", "deepseek/deepseek-chat-v3-0324:free"),
+            ("DeepSeek - latest iteration of the flagship chat model family from the DeepSeek team", "deepseek/deepseek-chat-v3-0324:free"),
+        ],
+        "ollama": [
+            ("llama3.1 local", "llama3.1"),
+            ("qwen3", "qwen3"),
         ]
+    }
     choice = questionary.select(
         "Select Your [Deep-Thinking LLM Engine]:",
         choices=[
             questionary.Choice(display, value=value)
-            for display, value in DEEP_AGENT_OPTIONS
+            for display, value in DEEP_AGENT_OPTIONS[provider.lower()]
         ],
         instruction="\n- Use arrow keys to navigate\n- Press Enter to select",
         style=questionary.Style(
@@ -193,3 +238,39 @@ def select_deep_thinking_agent() -> str:
         exit(1)
     return choice
+def select_llm_provider() -> tuple[str, str]:
+    """Select the LLM provider and its API base URL using interactive selection."""
+    # Define provider options with their corresponding endpoints
+    BASE_URLS = [
+        ("OpenAI", "https://api.openai.com/v1"),
+        ("Anthropic", "https://api.anthropic.com/"),
+        ("Google", "https://generativelanguage.googleapis.com/v1"),
+        ("Openrouter", "https://openrouter.ai/api/v1"),
+        ("Ollama", "http://localhost:11434/v1"),
+    ]
+    choice = questionary.select(
+        "Select your LLM Provider:",
+        choices=[
+            questionary.Choice(display, value=(display, value))
+            for display, value in BASE_URLS
+        ],
+        instruction="\n- Use arrow keys to navigate\n- Press Enter to select",
+        style=questionary.Style(
+            [
+                ("selected", "fg:magenta noinherit"),
+                ("highlighted", "fg:magenta noinherit"),
+                ("pointer", "fg:magenta noinherit"),
+            ]
+        ),
+    ).ask()
+    if choice is None:
+        console.print("\n[red]No LLM provider selected. Exiting...[/red]")
+        exit(1)
+    display_name, url = choice
+    print(f"You selected: {display_name}\tURL: {url}")
+    return display_name, url
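Stripped of the `questionary` prompt, what `select_llm_provider()` returns — and how `get_user_selections()` normalizes it — reduces to a table lookup. A non-interactive sketch (`pick_provider` is an illustrative helper, not part of the repo; `BASE_URLS` mirrors the table in the hunk above):

```python
BASE_URLS = [
    ("OpenAI", "https://api.openai.com/v1"),
    ("Anthropic", "https://api.anthropic.com/"),
    ("Google", "https://generativelanguage.googleapis.com/v1"),
    ("Openrouter", "https://openrouter.ai/api/v1"),
    ("Ollama", "http://localhost:11434/v1"),
]

def pick_provider(name: str) -> tuple[str, str]:
    # Case-insensitive lookup returning the same (display, url) tuple
    # shape the interactive selector produces.
    for display, url in BASE_URLS:
        if display.lower() == name.lower():
            return display, url
    raise ValueError(f"unknown provider: {name}")

display_name, backend_url = pick_provider("ollama")
llm_provider = display_name.lower()  # stored lowercased in the selections dict
```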


@@ -3,8 +3,10 @@ from tradingagents.default_config import DEFAULT_CONFIG
 # Create a custom config
 config = DEFAULT_CONFIG.copy()
-config["deep_think_llm"] = "gpt-4.1-nano"  # Use a different model
-config["quick_think_llm"] = "gpt-4.1-nano"  # Use a different model
+config["llm_provider"] = "google"  # Use a different provider
+config["backend_url"] = "https://generativelanguage.googleapis.com/v1"  # Use a different backend
+config["deep_think_llm"] = "gemini-2.0-flash"  # Use a different model
+config["quick_think_llm"] = "gemini-2.0-flash"  # Use a different model
 config["max_debate_rounds"] = 1  # Adjust debate rounds
 config["online_tools"] = True  # Use online tools

pyproject.toml (new file)

@@ -0,0 +1,34 @@
[project]
name = "tradingagents"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
"akshare>=1.16.98",
"backtrader>=1.9.78.123",
"chainlit>=2.5.5",
"chromadb>=1.0.12",
"eodhd>=1.0.32",
"feedparser>=6.0.11",
"finnhub-python>=2.4.23",
"langchain-anthropic>=0.3.15",
"langchain-experimental>=0.3.4",
"langchain-google-genai>=2.1.5",
"langchain-openai>=0.3.23",
"langgraph>=0.4.8",
"pandas>=2.3.0",
"parsel>=1.10.0",
"praw>=7.8.1",
"pytz>=2025.2",
"questionary>=2.1.0",
"redis>=6.2.0",
"requests>=2.32.4",
"rich>=14.0.0",
"setuptools>=80.9.0",
"stockstats>=0.6.5",
"tqdm>=4.67.1",
"tushare>=1.4.21",
"typing-extensions>=4.14.0",
"yfinance>=0.2.63",
]


@@ -22,3 +22,5 @@ redis
 chainlit
 rich
 questionary
+langchain_anthropic
+langchain-google-genai


@@ -22,7 +22,7 @@ def create_fundamentals_analyst(llm, toolkit):
     system_message = (
         "You are a researcher tasked with analyzing fundamental information over the past week about a company. Please write a comprehensive report of the company's fundamental information such as financial documents, company profile, basic company financials, company financial history, insider sentiment and insider transactions to gain a full view of the company's fundamental information to inform traders. Make sure to include as much detail as possible. Do not simply state the trends are mixed, provide detailed and finegrained analysis and insights that may help traders make decisions."
-        + " Make sure to append a Makrdown table at the end of the report to organize key points in the report, organized and easy to read.",
+        + " Make sure to append a Markdown table at the end of the report to organize key points in the report, organized and easy to read.",
     )
     prompt = ChatPromptTemplate.from_messages(
@@ -51,9 +51,14 @@ def create_fundamentals_analyst(llm, toolkit):
     result = chain.invoke(state["messages"])
+    report = ""
+    if len(result.tool_calls) == 0:
+        report = result.content
     return {
         "messages": [result],
-        "fundamentals_report": result.content,
+        "fundamentals_report": report,
     }
     return fundamentals_analyst_node
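The pattern applied here (and to the market, news, and social analysts below) — only treat the model output as a final report when it contains no tool calls — can be checked in isolation. `DummyResult` and `finalize_report` are illustrative stand-ins for the LangChain message object and the node body:

```python
class DummyResult:
    # Stand-in for a LangChain AIMessage with .content and .tool_calls
    def __init__(self, content, tool_calls):
        self.content = content
        self.tool_calls = tool_calls

def finalize_report(result):
    # Mirrors the hunk: an empty tool_calls list means the model is done
    # and its content is the report; otherwise keep the report empty so a
    # tool-call turn is never stored as the report text.
    report = ""
    if len(result.tool_calls) == 0:
        report = result.content
    return {"messages": [result], "fundamentals_report": report}

out = finalize_report(DummyResult("All good", []))
```

Before this change, a turn whose content was an Anthropic tool-call payload would have been written into the report field verbatim.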


@@ -76,9 +76,14 @@ Volume-Based Indicators:
     result = chain.invoke(state["messages"])
+    report = ""
+    if len(result.tool_calls) == 0:
+        report = result.content
     return {
         "messages": [result],
-        "market_report": result.content,
+        "market_report": report,
     }
     return market_analyst_node


@@ -47,9 +47,14 @@ def create_news_analyst(llm, toolkit):
     chain = prompt | llm.bind_tools(tools)
     result = chain.invoke(state["messages"])
+    report = ""
+    if len(result.tool_calls) == 0:
+        report = result.content
     return {
         "messages": [result],
-        "news_report": result.content,
+        "news_report": report,
     }
     return news_analyst_node


@@ -47,9 +47,14 @@ def create_social_media_analyst(llm, toolkit):
     result = chain.invoke(state["messages"])
+    report = ""
+    if len(result.tool_calls) == 0:
+        report = result.content
     return {
         "messages": [result],
-        "sentiment_report": report,
+        "sentiment_report": report,
     }
     return social_media_analyst_node


@@ -12,13 +12,21 @@ from dateutil.relativedelta import relativedelta
 from langchain_openai import ChatOpenAI
 import tradingagents.dataflows.interface as interface
 from tradingagents.default_config import DEFAULT_CONFIG
+from langchain_core.messages import HumanMessage
 def create_msg_delete():
     def delete_messages(state):
-        """To prevent message history from overflowing, regularly clear message history after a stage of the pipeline is done"""
+        """Clear messages and add placeholder for Anthropic compatibility"""
         messages = state["messages"]
-        return {"messages": [RemoveMessage(id=m.id) for m in messages]}
+        # Remove all messages
+        removal_operations = [RemoveMessage(id=m.id) for m in messages]
+        # Add a minimal placeholder message
+        placeholder = HumanMessage(content="Continue")
+        return {"messages": removal_operations + [placeholder]}
     return delete_messages
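The remove-then-placeholder reducer above exists because Anthropic rejects a chat call with zero messages. Its logic can be exercised without LangGraph — the two dataclasses below are minimal stand-ins for LangChain's message objects, modeling only the fields the reducer touches:

```python
import itertools
from dataclasses import dataclass, field

_ids = itertools.count()

@dataclass
class HumanMessage:
    # Stand-in: real HumanMessage lives in langchain_core.messages
    content: str
    id: int = field(default_factory=lambda: next(_ids))

@dataclass
class RemoveMessage:
    # Stand-in: a removal operation keyed by message id
    id: int

def delete_messages(state):
    """Clear history, but leave one placeholder so providers that
    reject empty chats (e.g. Anthropic) still get a message."""
    messages = state["messages"]
    removal_operations = [RemoveMessage(id=m.id) for m in messages]
    placeholder = HumanMessage(content="Continue")
    return {"messages": removal_operations + [placeholder]}

state = {"messages": [HumanMessage("hi"), HumanMessage("report...")]}
update = delete_messages(state)
```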
@@ -116,7 +124,7 @@ class Toolkit:
     def get_YFin_data(
         symbol: Annotated[str, "ticker symbol of the company"],
         start_date: Annotated[str, "Start date in yyyy-mm-dd format"],
-        end_date: Annotated[str, "Start date in yyyy-mm-dd format"],
+        end_date: Annotated[str, "End date in yyyy-mm-dd format"],
     ) -> str:
         """
         Retrieve the stock price data for a given ticker symbol from Yahoo Finance.
@@ -137,7 +145,7 @@ class Toolkit:
     def get_YFin_data_online(
         symbol: Annotated[str, "ticker symbol of the company"],
         start_date: Annotated[str, "Start date in yyyy-mm-dd format"],
-        end_date: Annotated[str, "Start date in yyyy-mm-dd format"],
+        end_date: Annotated[str, "End date in yyyy-mm-dd format"],
     ) -> str:
         """
         Retrieve the stock price data for a given ticker symbol from Yahoo Finance.
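The `Annotated` strings being corrected above matter because they are the per-argument descriptions surfaced to the LLM as the tool schema; they are recoverable at runtime via `typing.get_type_hints`. A self-contained sketch (the trivial function body is illustrative only):

```python
from typing import Annotated, get_type_hints

def get_YFin_data(
    symbol: Annotated[str, "ticker symbol of the company"],
    start_date: Annotated[str, "Start date in yyyy-mm-dd format"],
    end_date: Annotated[str, "End date in yyyy-mm-dd format"],
) -> str:
    # Illustrative body; the real toolkit fetches Yahoo Finance data.
    return f"{symbol}:{start_date}..{end_date}"

# include_extras=True preserves the Annotated metadata instead of
# collapsing each hint to its base type.
hints = get_type_hints(get_YFin_data, include_extras=True)
desc = hints["end_date"].__metadata__[0]
print(desc)  # prints: End date in yyyy-mm-dd format
```

A copy-pasted "Start date" string here would have described `end_date` wrongly in every generated tool schema, which is exactly what the diff fixes.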


@@ -1,19 +1,23 @@
 import chromadb
 from chromadb.config import Settings
 from openai import OpenAI
+import numpy as np
 class FinancialSituationMemory:
-    def __init__(self, name):
-        self.client = OpenAI()
+    def __init__(self, name, config):
+        if config["backend_url"] == "http://localhost:11434/v1":
+            self.embedding = "nomic-embed-text"
+        else:
+            self.embedding = "text-embedding-3-small"
+        self.client = OpenAI(base_url=config["backend_url"])
         self.chroma_client = chromadb.Client(Settings(allow_reset=True))
         self.situation_collection = self.chroma_client.create_collection(name=name)
     def get_embedding(self, text):
         """Get OpenAI embedding for a text"""
         response = self.client.embeddings.create(
-            model="text-embedding-ada-002", input=text
+            model=self.embedding, input=text
         )
         return response.data[0].embedding
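The embedding-model selection added to `__init__` above reduces to a pure function, which makes the backend dependency easy to see and test (`choose_embedding_model` is an illustrative refactor, not a function in the repo):

```python
def choose_embedding_model(backend_url: str) -> str:
    # A local Ollama backend gets the local embedding model; any other
    # backend falls back to OpenAI's hosted embedding model, matching
    # the branch in FinancialSituationMemory.__init__.
    if backend_url == "http://localhost:11434/v1":
        return "nomic-embed-text"
    return "text-embedding-3-small"
```

Note the comparison is an exact URL match, so a non-default Ollama port or host would silently fall through to the OpenAI model name.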


@@ -628,7 +628,7 @@ def get_YFin_data_window(
 def get_YFin_data_online(
     symbol: Annotated[str, "ticker symbol of the company"],
     start_date: Annotated[str, "Start date in yyyy-mm-dd format"],
-    end_date: Annotated[str, "Start date in yyyy-mm-dd format"],
+    end_date: Annotated[str, "End date in yyyy-mm-dd format"],
 ):
     datetime.strptime(start_date, "%Y-%m-%d")
@@ -670,7 +670,7 @@ def get_YFin_data_online(
 def get_YFin_data(
     symbol: Annotated[str, "ticker symbol of the company"],
     start_date: Annotated[str, "Start date in yyyy-mm-dd format"],
-    end_date: Annotated[str, "Start date in yyyy-mm-dd format"],
+    end_date: Annotated[str, "End date in yyyy-mm-dd format"],
 ) -> str:
     # read in data
     data = pd.read_csv(
@@ -703,17 +703,18 @@ def get_YFin_data(
 def get_stock_news_openai(ticker, curr_date):
-    client = OpenAI()
+    config = get_config()
+    client = OpenAI(base_url=config["backend_url"])

     response = client.responses.create(
-        model="gpt-4.1-mini",
+        model=config["quick_think_llm"],
         input=[
             {
                 "role": "system",
                 "content": [
                     {
                         "type": "input_text",
-                        "text": f"Can you search Social Media for {ticker} on TSLA from 7 days before {curr_date} to {curr_date}? Make sure you only get the data posted during that period.",
+                        "text": f"Can you search Social Media for {ticker} from 7 days before {curr_date} to {curr_date}? Make sure you only get the data posted during that period.",
                     }
                 ],
             }
@@ -737,10 +738,11 @@ def get_stock_news_openai(ticker, curr_date):
 def get_global_news_openai(curr_date):
-    client = OpenAI()
+    config = get_config()
+    client = OpenAI(base_url=config["backend_url"])

     response = client.responses.create(
-        model="gpt-4.1-mini",
+        model=config["quick_think_llm"],
         input=[
             {
                 "role": "system",
@@ -771,10 +773,11 @@ def get_global_news_openai(curr_date):
 def get_fundamentals_openai(ticker, curr_date):
-    client = OpenAI()
+    config = get_config()
+    client = OpenAI(base_url=config["backend_url"])

     response = client.responses.create(
-        model="gpt-4.1-mini",
+        model=config["quick_think_llm"],
         input=[
             {
                 "role": "system",
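All three helpers now share one pattern: read the backend URL and quick-think model from the runtime config instead of hard-coding OpenAI values (the diff also drops a stray hard-coded "on TSLA" from the prompt). A hedged sketch of that pattern, with a stub standing in for the package's real `get_config` and the request assembled but not sent, so it runs offline:

```python
# Illustrative stand-in for the package's get_config(); the real function
# returns the merged runtime configuration from the dataflows package.
def get_config() -> dict:
    return {
        "backend_url": "https://api.openai.com/v1",
        "quick_think_llm": "gpt-4o-mini",
    }

def build_request_params(ticker: str, curr_date: str) -> dict:
    """Assemble model name and prompt the way the patched helpers do,
    taking both from config rather than hard-coding provider specifics."""
    config = get_config()
    return {
        "base_url": config["backend_url"],
        "model": config["quick_think_llm"],
        "prompt": (
            f"Can you search Social Media for {ticker} from 7 days before "
            f"{curr_date} to {curr_date}? Make sure you only get the data "
            f"posted during that period."
        ),
    }

params = build_request_params("TSLA", "2025-06-01")
print(params["model"], params["base_url"])
```

Because the model string now comes from `quick_think_llm`, pointing `backend_url` at an OpenAI-compatible server (Ollama, OpenRouter) requires no further code changes in these helpers.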


@@ -2,14 +2,17 @@ import os
 DEFAULT_CONFIG = {
     "project_dir": os.path.abspath(os.path.join(os.path.dirname(__file__), ".")),
+    "results_dir": os.getenv("TRADINGAGENTS_RESULTS_DIR", "./results"),
     "data_dir": "/Users/yluo/Documents/Code/ScAI/FR1-data",
     "data_cache_dir": os.path.join(
         os.path.abspath(os.path.join(os.path.dirname(__file__), ".")),
         "dataflows/data_cache",
     ),
     # LLM settings
+    "llm_provider": "openai",
     "deep_think_llm": "o4-mini",
     "quick_think_llm": "gpt-4o-mini",
+    "backend_url": "https://api.openai.com/v1",
     # Debate and discussion settings
     "max_debate_rounds": 1,
     "max_risk_discuss_rounds": 1,


@@ -7,6 +7,9 @@ from datetime import date
 from typing import Dict, Any, Tuple, List, Optional
 from langchain_openai import ChatOpenAI
+from langchain_anthropic import ChatAnthropic
+from langchain_google_genai import ChatGoogleGenerativeAI

 from langgraph.prebuilt import ToolNode

 from tradingagents.agents import *
@@ -55,18 +58,26 @@ class TradingAgentsGraph:
         )

         # Initialize LLMs
-        self.deep_thinking_llm = ChatOpenAI(model=self.config["deep_think_llm"])
-        self.quick_thinking_llm = ChatOpenAI(
-            model=self.config["quick_think_llm"], temperature=0.1
-        )
+        if self.config["llm_provider"].lower() in ("openai", "ollama", "openrouter"):
+            self.deep_thinking_llm = ChatOpenAI(model=self.config["deep_think_llm"], base_url=self.config["backend_url"])
+            self.quick_thinking_llm = ChatOpenAI(model=self.config["quick_think_llm"], base_url=self.config["backend_url"])
+        elif self.config["llm_provider"].lower() == "anthropic":
+            self.deep_thinking_llm = ChatAnthropic(model=self.config["deep_think_llm"], base_url=self.config["backend_url"])
+            self.quick_thinking_llm = ChatAnthropic(model=self.config["quick_think_llm"], base_url=self.config["backend_url"])
+        elif self.config["llm_provider"].lower() == "google":
+            self.deep_thinking_llm = ChatGoogleGenerativeAI(model=self.config["deep_think_llm"])
+            self.quick_thinking_llm = ChatGoogleGenerativeAI(model=self.config["quick_think_llm"])
+        else:
+            raise ValueError(f"Unsupported LLM provider: {self.config['llm_provider']}")
+
         self.toolkit = Toolkit(config=self.config)

         # Initialize memories
-        self.bull_memory = FinancialSituationMemory("bull_memory")
-        self.bear_memory = FinancialSituationMemory("bear_memory")
-        self.trader_memory = FinancialSituationMemory("trader_memory")
-        self.invest_judge_memory = FinancialSituationMemory("invest_judge_memory")
-        self.risk_manager_memory = FinancialSituationMemory("risk_manager_memory")
+        self.bull_memory = FinancialSituationMemory("bull_memory", self.config)
+        self.bear_memory = FinancialSituationMemory("bear_memory", self.config)
+        self.trader_memory = FinancialSituationMemory("trader_memory", self.config)
+        self.invest_judge_memory = FinancialSituationMemory("invest_judge_memory", self.config)
+        self.risk_manager_memory = FinancialSituationMemory("risk_manager_memory", self.config)

         # Create tool nodes
         self.tool_nodes = self._create_tool_nodes()
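The constructor's branching can be factored into a small dispatch helper for testing. A sketch under the assumption that the provider string should be matched case-insensitively throughout (the patch lowercases only some comparisons; this sketch normalizes all of them, and stands in plain strings for the langchain chat-model classes so the logic runs without those packages installed):

```python
def resolve_llm_class(provider: str) -> str:
    """Mirror the provider branching in TradingAgentsGraph.__init__:
    OpenAI-compatible backends (openai/ollama/openrouter) share the
    ChatOpenAI client, Anthropic and Google get their own classes,
    and anything else is rejected with the same error message."""
    p = provider.lower()  # assumption: case-insensitive match for every branch
    if p in ("openai", "ollama", "openrouter"):
        return "ChatOpenAI"
    if p == "anthropic":
        return "ChatAnthropic"
    if p == "google":
        return "ChatGoogleGenerativeAI"
    raise ValueError(f"Unsupported LLM provider: {provider}")

print(resolve_llm_class("OpenRouter"))
```

Routing Ollama and OpenRouter through `ChatOpenAI` works because both expose OpenAI-compatible endpoints; only `backend_url` and the model names differ. Note the Google branch ignores `backend_url`, since `ChatGoogleGenerativeAI` manages its own endpoint.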

uv.lock: new generated file (5405 lines); diff suppressed because it is too large.