TradingAgents: Multi-Agents LLM Financial Trading Framework

🎉 TradingAgents officially released! We have received numerous inquiries about the work, and we would like to express our thanks for the enthusiasm in our community.

So we decided to fully open-source the framework. Looking forward to building impactful projects with you!

🚀 TradingAgents | ⚡ Installation & CLI | 🎬 Demo | 📦 Package Usage | 🤝 Contributing | 📄 Citation

TradingAgents Framework

TradingAgents is a multi-agent trading framework that mirrors the dynamics of real-world trading firms. It deploys specialized LLM-powered agents, from fundamentals analysts, sentiment experts, and technical analysts to traders and a risk-management team, which collaboratively evaluate market conditions and inform trading decisions. These agents also engage in dynamic debates to pinpoint the optimal strategy.

TradingAgents framework is designed for research purposes. Trading performance may vary based on many factors, including the chosen backbone language models, model temperature, trading periods, the quality of data, and other non-deterministic factors. It is not intended as financial, investment, or trading advice.

Our framework decomposes complex trading tasks into specialized roles, giving the system a robust, scalable approach to market analysis and decision-making.

Analyst Team

  • Fundamentals Analyst: Evaluates company financials and performance metrics, identifying intrinsic values and potential red flags.
  • Sentiment Analyst: Analyzes social media and public sentiment using sentiment scoring algorithms to gauge short-term market mood.
  • News Analyst: Monitors global news and macroeconomic indicators, interpreting the impact of events on market conditions.
  • Technical Analyst: Utilizes technical indicators (like MACD and RSI) to detect trading patterns and forecast price movements.
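To make the Technical Analyst's toolkit concrete, here is a minimal sketch of the RSI calculation it relies on. This is illustrative only: the framework obtains its indicators through its own data tools, not this helper, and the price series below is made up.

```python
# Illustrative only: a minimal Relative Strength Index (RSI) computation of
# the kind the Technical Analyst consumes. Not the framework's actual code.

def rsi(closes: list[float], period: int = 14) -> float:
    """RSI over the last `period` price changes (simple-average variant)."""
    if len(closes) < period + 1:
        raise ValueError("need at least period + 1 closing prices")
    # Day-over-day price changes
    changes = [b - a for a, b in zip(closes, closes[1:])]
    recent = changes[-period:]
    avg_gain = sum(c for c in recent if c > 0) / period
    avg_loss = sum(-c for c in recent if c < 0) / period
    if avg_loss == 0:
        return 100.0  # no losses in the window: maximally overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

prices = [44.0, 44.5, 44.2, 44.8, 45.1, 44.9, 45.3, 45.6,
          45.4, 45.8, 46.0, 45.7, 46.2, 46.5, 46.3]
print(round(rsi(prices), 1))
```

Values above roughly 70 are conventionally read as overbought and below 30 as oversold, which is the kind of signal the agent folds into its analysis.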

Researcher Team

  • Comprises both bullish and bearish researchers who critically assess the insights provided by the Analyst Team. Through structured debates, they balance potential gains against inherent risks.

Trader Agent

  • Synthesizes the analysts' and researchers' reports to make informed trading decisions, determining the timing and magnitude of trades based on comprehensive market insights.

Risk Management and Portfolio Manager

  • Continuously evaluates portfolio risk by assessing market volatility, liquidity, and other risk factors. The risk management team evaluates and adjusts trading strategies, providing assessment reports to the Portfolio Manager for the final decision.
  • The Portfolio Manager approves/rejects the transaction proposal. If approved, the order will be sent to the simulated exchange and executed.

Installation and CLI

Installation

Clone TradingAgents:

git clone https://github.com/devtrack/TradingAgents.git
cd TradingAgents

Create a virtual environment in any of your favorite environment managers:

conda create -n tradingagents python=3.13
conda activate tradingagents

Install dependencies:

pip install -r requirements.txt

Required APIs

You will also need the FMP API for financial data. All of our code is implemented with the free tier. For LLM access, connect your OpenAI account via the login flow (see below) rather than providing an OPENAI_API_KEY.

export FMP_API_KEY=YOUR_FMP_API_KEY

By default the framework uses FMP for all financial data. If you prefer to use Finnhub instead, set financial_data_provider to finnhub in your runtime configuration.
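Assuming DEFAULT_CONFIG is a plain dictionary (as in tradingagents/default_config.py), switching providers is a one-key override. The stand-in dict below is illustrative only, not the real default config:

```python
# Illustrative stand-in for tradingagents.default_config.DEFAULT_CONFIG;
# the real dict carries many more keys (see tradingagents/default_config.py).
DEFAULT_CONFIG = {"financial_data_provider": "fmp"}

# Copy first so the shared default is left untouched, then override.
config = DEFAULT_CONFIG.copy()
config["financial_data_provider"] = "finnhub"  # opt out of the FMP default

print(config["financial_data_provider"])
```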

Authentication and Model Discovery

To use the CLI, log in to the authentication backend (device-code OAuth).

  • Prerequisite: an authorized account on the target API. The default URLs and identifiers can be customized via the TRADINGAGENTS_AUTH_BASE_URL, TRADINGAGENTS_CLIENT_ID, and TRADINGAGENTS_SCOPE environment variables.
  • Logging in: run python -m cli.main login (or tradingagents login if installed as a package). The command prints the verification URL and opens your browser (disable this with --open-browser False).
  • Token storage: the client first persists tokens in the system keyring. If the keyring is unavailable, they are stored in ~/.tradingagents/auth_tokens.json with restricted permissions (0600).
  • Listing/choosing a model: after logging in, run python -m cli.main models list to fetch the catalog. The table shows providers, available models, and their capabilities to help you select the models to use in your configurations.
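The keyring-unavailable fallback described above can be mimicked in a few lines. This sketch only illustrates the 0600-permission file fallback; it is not the client's actual code, and the token field names are assumptions:

```python
import json
from pathlib import Path

def save_tokens_fallback(tokens: dict, path: Path) -> None:
    """Persist OAuth tokens to a JSON file readable only by its owner,
    mirroring the fallback used when the system keyring is unavailable."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(tokens))
    path.chmod(0o600)  # owner read/write only

# Default fallback location used by the CLI:
target = Path.home() / ".tradingagents" / "auth_tokens.json"
# Example call (field names are illustrative, not the client's schema):
# save_tokens_fallback({"access_token": "...", "refresh_token": "..."}, target)
```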

Quick Test Checklist

  1. Initialize the environment

    • Create and activate a virtual environment (conda or venv), then install the dependencies: pip install -r requirements.txt.
    • Export FMP_API_KEY (or set financial_data_provider=finnhub). LLM access goes through your OpenAI account, connected with python -m cli.main login, with no OPENAI_API_KEY required.
  2. Test device-code authentication

    • Run python -m cli.main login --open-browser False to display the verification URL and code.
    • Enter the code in your browser to authorize the application, then verify that the token is stored (keyring or ~/.tradingagents/auth_tokens.json).
  3. Verify model discovery

    • Run python -m cli.main models list to display the providers, models, and capabilities accessible with your session.
    • Optional: use python -m cli.main models list --provider openai to filter when several backends are configured.
  4. End-to-end trial via the CLI

    • Run python -m cli.main and follow the interactive interface (ticker selection, date, the models retrieved in the previous step, research depth, etc.).
    • Verify that LLM calls use the authenticated session (no explicit API-key prompt while the login is valid).
  5. Automated tests (optional)

    • Run pytest -q to launch the unit and integration test suite. You can mock network calls via the environment variables provided in the tests if necessary.
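The mocking idea in step 5 can be sketched as a pytest-style test. The fake graph below is purely illustrative (the real TradingAgentsGraph makes network calls), and only the .propagate() signature is taken from this README:

```python
# Illustrative smoke test: stub out the graph so no network or API access
# is needed. The real class is tradingagents.graph.trading_graph.TradingAgentsGraph.
from unittest.mock import MagicMock

def run_decision(graph, ticker: str, date: str) -> str:
    """Tiny wrapper mirroring the .propagate() call shown in Python Usage."""
    _, decision = graph.propagate(ticker, date)
    return decision

def test_run_decision_returns_stubbed_decision():
    fake_graph = MagicMock()
    fake_graph.propagate.return_value = ({}, "BUY")
    assert run_decision(fake_graph, "NVDA", "2024-05-10") == "BUY"
    fake_graph.propagate.assert_called_once_with("NVDA", "2024-05-10")

test_run_decision_returns_stubbed_decision()
```

This pattern lets the suite exercise the decision plumbing without spending API credits.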

CLI Usage

You can also try out the CLI directly by running:

python -m cli.main

You will see a screen where you can select your desired tickers, date, LLMs, research depth, etc.

An interface will appear showing results as they load, letting you track each agent's progress as it runs.

TradingAgents Package

Implementation Details

We built TradingAgents with LangGraph to ensure flexibility and modularity. We use o1-preview and gpt-4o as our deep-thinking and fast-thinking LLMs in our experiments. For testing purposes, however, we recommend o4-mini and gpt-4.1-mini to save on costs, as our framework makes many API calls.

Python Usage

To use TradingAgents inside your code, import the tradingagents module and initialize a TradingAgentsGraph() object. The .propagate() function returns a decision. You can run main.py, or start from this quick example:

from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

ta = TradingAgentsGraph(debug=True, config=DEFAULT_CONFIG.copy())

# forward propagate
_, decision = ta.propagate("NVDA", "2024-05-10")
print(decision)

You can also adjust the default configuration to set your own choice of LLMs, debate rounds, etc.

from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

# Create a custom config
config = DEFAULT_CONFIG.copy()
config["deep_think_llm"] = "gpt-4.1-nano"  # Use a different model
config["quick_think_llm"] = "gpt-4.1-nano"  # Use a different model
config["max_debate_rounds"] = 1  # Number of debate rounds
config["online_tools"] = True # Use online tools or cached data

# Initialize with custom config
ta = TradingAgentsGraph(debug=True, config=config)

# forward propagate
_, decision = ta.propagate("NVDA", "2024-05-10")
print(decision)

We recommend enabling online_tools for experimentation, as they provide access to real-time data. The agents' offline tools rely on cached data from our Tauric TradingDB, a curated dataset we use for backtesting. We are currently refining this dataset and plan to release it soon alongside our upcoming projects. Stay tuned!

You can view the full list of configurations in tradingagents/default_config.py.

Contributing

We welcome contributions from the community! Whether it's fixing a bug, improving documentation, or suggesting a new feature, your input helps make this project better. If you are interested in this line of research, please consider joining our open-source financial AI research community Tauric Research.

Running Tests

After installing the package dependencies, you can run the unit tests with:

pip install pytest
pytest

Citation

Please cite our work if you find TradingAgents helpful :)

@misc{xiao2025tradingagentsmultiagentsllmfinancial,
      title={TradingAgents: Multi-Agents LLM Financial Trading Framework}, 
      author={Yijia Xiao and Edward Sun and Di Luo and Wei Wang},
      year={2025},
      eprint={2412.20138},
      archivePrefix={arXiv},
      primaryClass={q-fin.TR},
      url={https://arxiv.org/abs/2412.20138}, 
}
