Docker support and Ollama support (#47)
- Added support for running CLI and Ollama server via Docker
- Introduced tests for local embeddings model and standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER

tests/README.md (new file, 185 lines)
@@ -0,0 +1,185 @@
# TradingAgents Test Suite

This directory contains all test scripts for validating the TradingAgents setup and configuration.

## Test Scripts

### 🧪 `run_tests.py` - Automated Test Runner
**Purpose**: Automatically detects your LLM provider and runs appropriate tests.

**Usage**:
```bash
# Run all tests (auto-detects provider from LLM_PROVIDER env var)
# Always run from project root, not from tests/ directory
python tests/run_tests.py

# In Docker
docker compose --profile openai run --rm app-openai python tests/run_tests.py
docker compose --profile ollama exec app-ollama python tests/run_tests.py
```

**Important**: Always run the test runner from the **project root directory**, not from inside the `tests/` directory. The runner automatically handles path resolution and changes to the correct working directory.
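That path handling comes down to a few lines; the same logic appears in `run_tests.py` further down in this commit:

```python
import os

# run_tests.py lives in tests/, so the project root is its parent directory
tests_dir = os.path.dirname(os.path.abspath(__file__))
project_root = os.path.dirname(tests_dir)
os.chdir(project_root)  # all test paths are then relative to the project root
```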
**Features**:
- Auto-detects LLM provider from environment
- Runs provider-specific tests only
- Provides comprehensive test summary
- Handles timeouts and error reporting

---

### 🔌 `test_openai_connection.py` - OpenAI API Tests
**Purpose**: Validates OpenAI API connectivity and functionality.

**Tests**:
- ✅ API key validation
- ✅ Chat completion (using `gpt-4o-mini`)
- ✅ Embeddings (using `text-embedding-3-small`)
- ✅ Configuration validation

**Usage**:
```bash
# From project root
python tests/test_openai_connection.py

# In Docker
docker compose --profile openai run --rm app-openai python tests/test_openai_connection.py
```

**Requirements**:
- `OPENAI_API_KEY` environment variable
- `LLM_PROVIDER=openai`

---

### 🦙 `test_ollama_connection.py` - Ollama Connectivity Tests
**Purpose**: Validates Ollama server connectivity and model availability.

**Tests**:
- ✅ Ollama API accessibility
- ✅ Model availability (`qwen3:0.6b`, `nomic-embed-text`)
- ✅ OpenAI-compatible API functionality
- ✅ Chat completion and embeddings

**Usage**:
```bash
# From project root
python tests/test_ollama_connection.py

# In Docker
docker compose --profile ollama exec app-ollama python tests/test_ollama_connection.py
```

**Requirements**:
- Ollama server running
- Required models downloaded
- `LLM_PROVIDER=ollama`

---

### ⚙️ `test_setup.py` - General Setup Validation
**Purpose**: Validates basic TradingAgents setup and configuration.

**Tests**:
- ✅ Python package imports
- ✅ Configuration loading
- ✅ TradingAgentsGraph initialization
- ✅ Data access capabilities

**Usage**:
```bash
# From project root
python tests/test_setup.py

# In Docker
docker compose --profile openai run --rm app-openai python tests/test_setup.py
docker compose --profile ollama exec app-ollama python tests/test_setup.py
```

**Requirements**:
- TradingAgents dependencies installed
- Basic environment configuration

---

## Test Results Interpretation

### ✅ Success Indicators
- All tests pass
- API connections established
- Models available and responding
- Configuration properly loaded

### ❌ Common Issues

**OpenAI Tests Failing**:
- Check `OPENAI_API_KEY` is set correctly
- Verify API key has sufficient quota
- Ensure internet connectivity

**Ollama Tests Failing**:
- Verify Ollama service is running
- Check if models are downloaded (`./init-ollama.sh`)
- Confirm `ollama list` shows required models

**Setup Tests Failing**:
- Check Python dependencies are installed
- Verify environment variables are set
- Ensure `.env` file is properly configured

---

## Quick Testing Commands

**⚠️ Important**: Always run these commands from the **project root directory** (not from inside `tests/`):

```bash
# Test everything automatically (from project root)
python tests/run_tests.py

# Test specific provider (from project root)
LLM_PROVIDER=openai python tests/run_tests.py
LLM_PROVIDER=ollama python tests/run_tests.py

# Test individual components (from project root)
python tests/test_openai_connection.py
python tests/test_ollama_connection.py
python tests/test_setup.py
```
**Why from project root?**
- Tests need to import the `tradingagents` package
- The `tradingagents` package is located in the project root
- Running from the `tests/` directory would cause import errors (see the sketch below)
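If a test ever needs to be runnable from an arbitrary working directory, a common workaround (not used by the scripts in this commit; shown only as an illustration) is to put the project root on `sys.path` explicitly:

```python
import sys
from pathlib import Path

# This file would live in tests/, so resolving the file path and going up two
# levels (tests/<file>.py -> tests/ -> project root) locates the project root.
PROJECT_ROOT = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(PROJECT_ROOT))

from tradingagents.default_config import DEFAULT_CONFIG  # now importable from anywhere
```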
---

## Adding New Tests

To add new tests:

1. Create a new test script in the `tests/` directory
2. Follow the naming convention: `test_<component>.py`
3. Include proper error handling and status reporting
4. Update `run_tests.py` if automatic detection is needed (see the sketch after the template)
5. Document the test in this README

**Test Script Template**:
```python
#!/usr/bin/env python3
"""Test script for <component>"""

def test_component():
    """Test <component> functionality."""
    try:
        # Test implementation
        print("✅ Test passed")
        return True
    except Exception as e:
        print(f"❌ Test failed: {e}")
        return False

if __name__ == "__main__":
    success = test_component()
    exit(0 if success else 1)
```
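For step 4, wiring an always-run test into the runner follows the same pattern `run_tests.py` already uses for `test_setup.py`; a minimal sketch, using a hypothetical `tests/test_my_component.py` that is not part of this commit:

```python
# Inside main() in tests/run_tests.py, next to the existing test_setup.py block:
if run_test_script("tests/test_my_component.py"):
    tests_passed.append("tests/test_my_component.py")
tests_run.append("tests/test_my_component.py")
```

Provider-specific tests would instead go under the matching `if provider == "...":` branch.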

tests/__init__.py (new file, 10 lines)
@@ -0,0 +1,10 @@
"""
TradingAgents Test Suite

This package contains all test scripts for the TradingAgents application:
- test_openai_connection.py: OpenAI API connectivity tests
- test_ollama_connection.py: Ollama connectivity tests
- test_setup.py: General setup and configuration tests
"""

__version__ = "1.0.0"

tests/run_tests.py (new file, 101 lines)
@@ -0,0 +1,101 @@
#!/usr/bin/env python3
"""
Test runner script for TradingAgents

This script automatically detects the LLM provider and runs appropriate tests.
"""

import os
import sys
import subprocess

def get_llm_provider():
    """Get the configured LLM provider from environment."""
    return os.environ.get("LLM_PROVIDER", "").lower()

def run_test_script(script_name):
    """Run a test script and return success status."""
    try:
        print(f"🧪 Running {script_name}...")
        result = subprocess.run([sys.executable, script_name],
                                capture_output=True, text=True, timeout=120)

        if result.returncode == 0:
            print(f"✅ {script_name} passed")
            if result.stdout:
                print(f"   Output: {result.stdout.strip()}")
            return True
        else:
            print(f"❌ {script_name} failed")
            if result.stderr:
                print(f"   Error: {result.stderr.strip()}")
            return False

    except subprocess.TimeoutExpired:
        print(f"⏰ {script_name} timed out")
        return False
    except Exception as e:
        print(f"💥 {script_name} crashed: {e}")
        return False

def main():
    """Main test runner function."""
    print("🚀 TradingAgents Test Runner")
    print("=" * 50)

    # Get project root directory (parent of tests directory)
    tests_dir = os.path.dirname(os.path.abspath(__file__))
    project_root = os.path.dirname(tests_dir)
    os.chdir(project_root)

    provider = get_llm_provider()
    print(f"📋 Detected LLM Provider: {provider or 'not set'}")

    tests_run = []
    tests_passed = []

    # Always run setup tests
    if run_test_script("tests/test_setup.py"):
        tests_passed.append("tests/test_setup.py")
    tests_run.append("tests/test_setup.py")

    # Run provider-specific tests
    if provider == "openai":
        print("\n🔍 Running OpenAI-specific tests...")
        if run_test_script("tests/test_openai_connection.py"):
            tests_passed.append("tests/test_openai_connection.py")
        tests_run.append("tests/test_openai_connection.py")

    elif provider == "ollama":
        print("\n🔍 Running Ollama-specific tests...")
        if run_test_script("tests/test_ollama_connection.py"):
            tests_passed.append("tests/test_ollama_connection.py")
        tests_run.append("tests/test_ollama_connection.py")

    else:
        print(f"\n⚠️ Unknown or unset LLM provider: '{provider}'")
        print("   Running all connectivity tests...")

        for test_script in ["tests/test_openai_connection.py", "tests/test_ollama_connection.py"]:
            if run_test_script(test_script):
                tests_passed.append(test_script)
            tests_run.append(test_script)

    # Summary
    print("\n" + "=" * 50)
    print(f"📊 Test Results: {len(tests_passed)}/{len(tests_run)} tests passed")

    for test in tests_run:
        status = "✅ PASS" if test in tests_passed else "❌ FAIL"
        print(f"   {test}: {status}")

    if len(tests_passed) == len(tests_run):
        print("\n🎉 All tests passed! TradingAgents is ready to use.")
        return 0
    else:
        print(f"\n⚠️ {len(tests_run) - len(tests_passed)} test(s) failed. Check configuration.")
        return 1

if __name__ == "__main__":
    exit_code = main()
    sys.exit(exit_code)

tests/test_ollama_connection.py (new file, 108 lines)
@@ -0,0 +1,108 @@
#!/usr/bin/env python3
"""
Simple test script to verify Ollama connection is working.
"""

import os
import requests
from openai import OpenAI

def test_ollama_connection():
    """Test if Ollama is accessible and responding."""

    # Get configuration from environment
    backend_url = os.environ.get("LLM_BACKEND_URL", "http://localhost:11434/v1")
    model = os.environ.get("LLM_DEEP_THINK_MODEL", "qwen3:0.6b")
    embedding_model = os.environ.get("LLM_EMBEDDING_MODEL", "nomic-embed-text")

    print("Testing Ollama connection:")
    print(f"  Backend URL: {backend_url}")
    print(f"  Model: {model}")
    print(f"  Embedding Model: {embedding_model}")

    # Test 1: Check if Ollama API is responding
    try:
        response = requests.get(f"{backend_url.replace('/v1', '')}/api/tags", timeout=10)
        if response.status_code == 200:
            print("✅ Ollama API is responding")
        else:
            print(f"❌ Ollama API returned status code: {response.status_code}")
            return False
    except Exception as e:
        print(f"❌ Failed to connect to Ollama API: {e}")
        return False

    # Test 2: Check if the model is available
    try:
        response = requests.get(f"{backend_url.replace('/v1', '')}/api/tags", timeout=10)
        models = response.json().get("models", [])
        model_names = [m.get("name", "") for m in models]

        if any(name.startswith(model) for name in model_names):
            print(f"✅ Model '{model}' is available")
        else:
            print(f"❌ Model '{model}' not found. Available models: {model_names}")
            return False
    except Exception as e:
        print(f"❌ Failed to check model availability: {e}")
        return False

    # Test 3: Test OpenAI-compatible API
    try:
        client = OpenAI(base_url=backend_url, api_key="dummy")
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Hello, say 'test successful'"}],
            max_tokens=50
        )
        print("✅ OpenAI-compatible API is working")
        print(f"   Response: {response.choices[0].message.content}")
        # Do not return yet: the embedding checks below still need to run.
    except Exception as e:
        print(f"❌ OpenAI-compatible API test failed: {e}")
        return False

    # Test 4: Check if the embedding model is available
    try:
        response = requests.get(f"{backend_url.replace('/v1', '')}/api/tags", timeout=10)
        models = response.json().get("models", [])
        model_names = [m.get("name") for m in models if m.get("name")]

        # Check if any of the available models starts with the embedding model name
        if any(name.startswith(embedding_model) for name in model_names):
            print(f"✅ Embedding Model '{embedding_model}' is available")
        else:
            print(f"❌ Embedding Model '{embedding_model}' not found. Available models: {model_names}")
            return False
    except Exception as e:
        print(f"❌ Failed to check embedding model availability: {e}")
        return False

    # Test 5: Test OpenAI-compatible embedding API
    try:
        client = OpenAI(base_url=backend_url, api_key="dummy")
        response = client.embeddings.create(
            model=embedding_model,
            input="This is a test sentence.",
            encoding_format="float"
        )
        if response.data and len(response.data) > 0 and response.data[0].embedding:
            print("✅ OpenAI-compatible embedding API is working")
            print(f"   Successfully generated embedding of dimension: {len(response.data[0].embedding)}")
            return True
        else:
            print("❌ Embedding API test failed: No embedding data in response")
            return False
    except Exception as e:
        print(f"❌ OpenAI-compatible embedding API test failed: {e}")
        return False

if __name__ == "__main__":
    success = test_ollama_connection()
    if success:
        print("\n🎉 All tests passed! Ollama is ready.")
        exit(0)
    else:
        print("\n💥 Tests failed! Check Ollama configuration.")
        exit(1)

tests/test_openai_connection.py (new file, 142 lines)
@@ -0,0 +1,142 @@
#!/usr/bin/env python3
"""
Test script to verify OpenAI API connection is working.
"""

import os
import sys
from openai import OpenAI

def test_openai_connection():
    """Test if OpenAI API is accessible and responding."""

    # Get configuration from environment
    api_key = os.environ.get("OPENAI_API_KEY")
    backend_url = os.environ.get("LLM_BACKEND_URL", "https://api.openai.com/v1")
    provider = os.environ.get("LLM_PROVIDER", "openai")

    print("Testing OpenAI API connection:")
    print(f"  Provider: {provider}")
    print(f"  Backend URL: {backend_url}")
    print(f"  API Key: {'✅ Set' if api_key and api_key != '<your-openai-key>' else '❌ Not set or using placeholder'}")

    if not api_key or api_key == "<your-openai-key>":
        print("❌ OPENAI_API_KEY is not set or still using placeholder value")
        print("   Please set your OpenAI API key in the .env file")
        return False

    # Test 1: Initialize OpenAI client
    try:
        client = OpenAI(
            api_key=api_key,
            base_url=backend_url
        )
        print("✅ OpenAI client initialized successfully")
    except Exception as e:
        print(f"❌ Failed to initialize OpenAI client: {e}")
        return False

    # Test 2: Test chat completion with a simple query
    try:
        print("🧪 Testing chat completion...")
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # Use the most cost-effective model for testing
            messages=[
                {"role": "user", "content": "Hello! Please respond with exactly: 'OpenAI API test successful'"}
            ],
            max_tokens=50,
            temperature=0
        )

        if response.choices and response.choices[0].message.content:
            content = response.choices[0].message.content.strip()
            print("✅ Chat completion successful")
            print(f"   Model: {response.model}")
            print(f"   Response: {content}")
            print(f"   Tokens used: {response.usage.total_tokens if response.usage else 'unknown'}")
        else:
            print("❌ Chat completion returned empty response")
            return False

    except Exception as e:
        print(f"❌ Chat completion test failed: {e}")
        if "insufficient_quota" in str(e).lower():
            print("   💡 This might be a quota/billing issue. Check your OpenAI account.")
        elif "invalid_api_key" in str(e).lower():
            print("   💡 Invalid API key. Please check your OPENAI_API_KEY.")
        return False

    # Test 3: Test embeddings (optional, for completeness)
    try:
        print("🧪 Testing embeddings...")
        response = client.embeddings.create(
            model="text-embedding-3-small",  # Cost-effective embedding model
            input="This is a test sentence for embeddings."
        )

        if response.data and len(response.data) > 0 and response.data[0].embedding:
            embedding = response.data[0].embedding
            print("✅ Embeddings successful")
            print(f"   Model: {response.model}")
            print(f"   Embedding dimension: {len(embedding)}")
            print(f"   Tokens used: {response.usage.total_tokens if response.usage else 'unknown'}")
        else:
            print("❌ Embeddings returned empty response")
            return False

    except Exception as e:
        print(f"❌ Embeddings test failed: {e}")
        print("   ⚠️ Embeddings test failed but chat completion worked. This is usually fine for basic usage.")
        # Don't return False here as embeddings might not be critical for all use cases

    return True

def test_config_validation():
    """Validate the configuration is properly set for OpenAI."""

    provider = os.environ.get("LLM_PROVIDER", "").lower()
    backend_url = os.environ.get("LLM_BACKEND_URL", "")

    print("\n🔧 Configuration validation:")

    if provider != "openai":
        print(f"⚠️ LLM_PROVIDER is '{provider}', expected 'openai'")
        print("   The app might still work if the provider supports OpenAI-compatible API")
    else:
        print("✅ LLM_PROVIDER correctly set to 'openai'")

    if "openai.com" in backend_url:
        print("✅ Using official OpenAI API endpoint")
    elif backend_url:
        print(f"ℹ️ Using custom endpoint: {backend_url}")
        print("   Make sure this endpoint is OpenAI-compatible")
    else:
        print("⚠️ LLM_BACKEND_URL not set, using default")

    # Check for common environment issues
    finnhub_key = os.environ.get("FINNHUB_API_KEY")
    if not finnhub_key or finnhub_key == "<your_finnhub_api_key_here>":
        print("⚠️ FINNHUB_API_KEY not set - financial data fetching may not work")
    else:
        print("✅ FINNHUB_API_KEY is set")

    return True

if __name__ == "__main__":
    print("🧪 OpenAI API Connection Test\n")

    config_ok = test_config_validation()
    api_ok = test_openai_connection()

    print("\n📊 Test Results:")
    print(f"   Configuration: {'✅ OK' if config_ok else '❌ Issues'}")
    print(f"   API Connection: {'✅ OK' if api_ok else '❌ Failed'}")

    if config_ok and api_ok:
        print("\n🎉 All tests passed! OpenAI API is ready for TradingAgents.")
        print("💡 You can now run the trading agents with OpenAI as the LLM provider.")
    else:
        print("\n💥 Some tests failed. Please check your configuration and API key.")
        print("💡 Make sure OPENAI_API_KEY is set correctly in your .env file.")

    sys.exit(0 if (config_ok and api_ok) else 1)

tests/test_setup.py (new file, 122 lines)
@@ -0,0 +1,122 @@
#!/usr/bin/env python3
"""
Test script to verify the complete TradingAgents setup works end-to-end.
"""

import os
import sys
from datetime import datetime, timedelta

def test_basic_setup():
    """Test basic imports and configuration"""
    try:
        from tradingagents.graph.trading_graph import TradingAgentsGraph
        from tradingagents.default_config import DEFAULT_CONFIG
        print("✅ Basic imports successful")
        return True
    except Exception as e:
        print(f"❌ Basic import failed: {e}")
        return False

def test_config():
    """Test configuration loading"""
    try:
        from tradingagents.default_config import DEFAULT_CONFIG

        # Check required environment variables
        required_vars = ['LLM_PROVIDER', 'OPENAI_API_KEY', 'FINNHUB_API_KEY']
        missing_vars = []

        for var in required_vars:
            if not os.environ.get(var):
                missing_vars.append(var)

        if missing_vars:
            print(f"⚠️ Missing environment variables: {missing_vars}")
            print("   This may cause issues with data fetching or LLM calls")
        else:
            print("✅ Required environment variables set")

        print("✅ Configuration loaded successfully")
        print(f"   LLM Provider: {os.environ.get('LLM_PROVIDER', 'not set')}")
        # Report whether the key exists without echoing the secret itself
        print(f"   OPENAI API KEY: {'set' if os.environ.get('OPENAI_API_KEY') else 'not set'}")
        print(f"   Backend URL: {os.environ.get('LLM_BACKEND_URL', 'not set')}")
        return True
    except Exception as e:
        print(f"❌ Configuration test failed: {e}")
        return False

def test_trading_graph_init():
    """Test TradingAgentsGraph initialization"""
    try:
        from tradingagents.graph.trading_graph import TradingAgentsGraph
        from tradingagents.default_config import DEFAULT_CONFIG

        # Create a minimal config for testing
        config = DEFAULT_CONFIG.copy()
        config["online_tools"] = False  # Use cached data for testing
        config["max_debate_rounds"] = 1  # Minimize API calls

        ta = TradingAgentsGraph(debug=True, config=config)
        print("✅ TradingAgentsGraph initialized successfully")
        return True
    except Exception as e:
        print(f"❌ TradingAgentsGraph initialization failed: {e}")
        return False

def test_data_access():
    """Test if we can access basic data"""
    try:
        from tradingagents.dataflows.yfin_utils import get_stock_data

        # Test with a simple stock query
        test_date = (datetime.now() - timedelta(days=30)).strftime('%Y-%m-%d')

        # This should work even without API keys if using cached data
        data = get_stock_data("AAPL", test_date)

        if data:
            print("✅ Data access test successful")
            return True
        else:
            print("⚠️ Data access returned empty results (may be expected with cached data)")
            return True
    except Exception as e:
        print(f"❌ Data access test failed: {e}")
        return False

def run_all_tests():
    """Run all tests"""
    print("🧪 Running TradingAgents setup tests...\n")

    tests = [
        ("Basic Setup", test_basic_setup),
        ("Configuration", test_config),
        ("TradingGraph Init", test_trading_graph_init),
        ("Data Access", test_data_access),
    ]

    passed = 0
    total = len(tests)

    for test_name, test_func in tests:
        print(f"Running {test_name} test...")
        try:
            if test_func():
                passed += 1
            print()
        except Exception as e:
            print(f"❌ {test_name} test crashed: {e}\n")

    print(f"📊 Test Results: {passed}/{total} tests passed")

    if passed == total:
        print("🎉 All tests passed! TradingAgents setup is working correctly.")
        return True
    else:
        print("⚠️ Some tests failed. Check the output above for details.")
        return False

if __name__ == "__main__":
    success = run_all_tests()
    sys.exit(0 if success else 1)