Q-learning, LLM investors pose new financial stability risks
ECB Paper

A new ECB working paper finds that the behavior of artificial-intelligence investors fundamentally alters financial fragility. Q-learning algorithms amplify fragility when funds face default risk, while large language models weaken market coordination and make outcomes harder to predict.

AI architectures diverge under stress

The study reveals that different AI architectures generate systematically divergent outcomes for financial stability.

Q-learning (QL) investors, which learn through trial and error, exhibit excessive redemptions under default risk, even when fundamentals are strong.

This 'hot stove effect' leads them to become overly cautious, amplifying financial fragility.
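A minimal bandit-style sketch can show how the hot stove effect arises in trial-and-error learning. The payoffs and parameters below are illustrative assumptions, not the paper's calibration: staying invested has the higher expected payoff, but a rare 'default' loss can push a Q-learner into redeeming, after which it rarely resamples the risky option and the depressed value estimate is slow to recover.

```python
import random

# Illustrative two-action bandit sketch of the 'hot stove effect'.
# Payoffs are assumptions for exposition, not the paper's calibration.
# Action 0 = stay invested: risky, but better on average (mean 1.0225).
# Action 1 = redeem: safe, always pays 1.0.

def run(seed, steps=300, alpha=0.2, eps=0.05):
    rng = random.Random(seed)
    Q = [1.0, 1.0]  # priors anchored at the safe value
    for _ in range(steps):
        # epsilon-greedy action choice
        a = rng.randrange(2) if rng.random() < eps else (0 if Q[0] >= Q[1] else 1)
        if a == 1:
            r = 1.0                                   # redeem: fixed safe payoff
        else:
            r = 0.5 if rng.random() < 0.05 else 1.05  # stay: rare 'default' loss
        Q[a] += alpha * (r - Q[a])                    # Q-learning update (bandit case)
    return 0 if Q[0] >= Q[1] else 1                   # final greedy action

# Staying has the higher expected payoff (0.95 * 1.05 + 0.05 * 0.5 = 1.0225 > 1.0),
# yet many learners that get burned by a default end up locked into redeeming.
share_redeem = sum(run(s) for s in range(500)) / 500
print(f"share of learners ending greedy on 'redeem': {share_redeem:.2f}")
```

The mechanism is the asymmetry in sampling: once a bad draw makes redeeming look better, the agent almost stops trying the risky action, so the pessimistic estimate self-perpetuates even though fundamentals are strong.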

In contrast, large language model (LLM) investors, which reason about expected outcomes, are broadly unaffected by default risk.

However, LLMs create a different problem: they display belief heterogeneity and weaken coordination, leading to unpredictable outcomes when multiple equilibria are possible.

This divergence challenges standard economic theories of market behavior.

New risks in an AI-driven market

Artificial intelligence is rapidly expanding its role in financial markets, from sophisticated algorithmic trading to new generative AI tools for retail investors.

This raises the critical question of whether AI systems could introduce novel risks to financial stability.

The paper addresses this by simulating AI agents in a canonical mutual fund redemption game, a stylized setting designed to capture financial fragility.

It compares reinforcement learning (Q-learning) with context-based inference (LLMs) to assess how these distinct AI approaches respond to economic and strategic uncertainty.
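A stylized payoff structure helps clarify why such a redemption game admits multiple equilibria. The numbers below are illustrative assumptions, not the paper's calibration: redeemers receive a fixed claim, while stayers earn the fund's return net of fire-sale losses that grow with the share of others who redeem.

```python
# Stylized redemption-game payoffs (illustrative assumptions, not the
# paper's calibration). Redeeming investors are paid a fixed claim;
# investors who stay earn the fund's return net of fire-sale losses
# that grow with the share of others who redeem.

def stay_payoff(redeem_share, fund_return=1.05, fire_sale_cost=0.3):
    # staying is hurt by others' redemptions (strategic complementarity)
    return fund_return - fire_sale_cost * redeem_share

REDEEM_PAYOFF = 1.0  # safe, fixed claim

# If nobody else redeems, staying beats redeeming -> 'all stay' is an equilibrium.
print(stay_payoff(0.0) > REDEEM_PAYOFF)  # True
# If everyone else redeems, staying is worse -> 'all redeem' is also an equilibrium.
print(stay_payoff(1.0) < REDEEM_PAYOFF)  # True
```

With two self-fulfilling equilibria, the outcome depends on how investors coordinate their expectations, which is precisely where the paper's finding on LLM belief heterogeneity and weakened coordination matters.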

AI's double-edged sword

This study provides a critical empirical foundation for understanding AI's systemic implications, moving beyond abstract debate to concrete simulation.

It starkly reveals that AI is not a uniform force; its underlying architecture fundamentally dictates whether it amplifies or mitigates financial fragility.

Regulators must urgently develop sophisticated tools to peer into these AI 'black boxes' and proactively manage novel risks before they manifest as crises.