🧠 Paper of the Day: Can LLMs Be Traders?

Can a large language model run a portfolio like Warren Buffett? Or maybe surf price trends like a Wall Street quant? Today’s paper, "Can Large Language Models Trade? Testing Financial Theories with LLM Agents in Market Simulations," dives into exactly that.

Turns out, LLMs can not only pretend to be traders—they can actually execute structured strategies in a live market simulation. Get ready: this paper introduces a full-blown open-source market where GPT-like agents place orders, chase dividends, and occasionally cause bubbles. 😮

Let’s break it down.

šŸ” The Problem

Can LLMs act as actual trading agents in a financial market—not just text predictors, but autonomous decision-makers?

Most AI trading systems are hard-coded with rules. But LLMs? They follow natural language prompts, not reward functions. That raises big questions:

  • Will LLMs follow trading strategies accurately?

  • Can they adapt to changing market conditions?

  • Do they create realistic market behaviors, like bubbles or price discovery?

  • And, maybe most importantly: could many similar LLMs acting together cause systemic risks?

📚 How They Studied It

The author built a realistic, open-source trading simulator, complete with:

  • Limit & market orders

  • Partial fills, order books, interest, dividends

  • LLM agents like value investors, momentum traders, market makers

  • Structured JSON trading outputs with valuation reasoning
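
To make the mechanics concrete, here is a minimal sketch of a limit-order book with price-time priority and partial fills, like the one the simulator implements. This is an illustrative toy, not the paper's actual code; class and field names are my own.

```python
from dataclasses import dataclass

@dataclass
class Order:
    side: str      # "buy" or "sell"
    price: float   # limit price
    qty: int       # remaining quantity

class OrderBook:
    """Toy price-time-priority limit-order book with partial fills."""

    def __init__(self):
        self.bids: list[Order] = []  # resting buys, best (highest) price first
        self.asks: list[Order] = []  # resting sells, best (lowest) price first

    def submit(self, order: Order) -> int:
        """Match against the opposite side; return the quantity filled."""
        book = self.asks if order.side == "buy" else self.bids
        filled = 0
        while order.qty > 0 and book:
            best = book[0]
            crosses = (order.price >= best.price) if order.side == "buy" \
                      else (order.price <= best.price)
            if not crosses:
                break
            trade = min(order.qty, best.qty)  # partial fill if sizes differ
            order.qty -= trade
            best.qty -= trade
            filled += trade
            if best.qty == 0:
                book.pop(0)
        if order.qty > 0:  # rest the unfilled remainder on its own side
            side = self.bids if order.side == "buy" else self.asks
            side.append(order)
            side.sort(key=lambda o: -o.price if order.side == "buy" else o.price)
        return filled
```

A buy for 8 shares at 101 against a resting sell of 5 at 100 fills 5 and rests the remaining 3 on the bid side, which is exactly the partial-fill behavior the agents have to cope with.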

Each LLM trades based on a system prompt (its personality/strategy) and a user prompt (market conditions). Agents submit orders and explain why.
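Concretely, an agent's structured reply might be parsed like this. The JSON schema below is a guess for illustration, not the paper's actual format; the field names are hypothetical.

```python
import json

# Hypothetical reply from a "value investor" agent; field names are
# illustrative, not the paper's actual schema.
raw_reply = """
{
  "valuation_reasoning": "Dividend stream implies a fair value near 28.",
  "valuation": 28.0,
  "orders": [
    {"decision": "buy", "order_type": "limit", "price": 27.5, "quantity": 10}
  ]
}
"""

def parse_decision(reply: str) -> list[dict]:
    """Validate the agent's JSON reply and extract executable orders."""
    data = json.loads(reply)
    orders = []
    for o in data.get("orders", []):
        assert o["decision"] in {"buy", "sell"}
        assert o["quantity"] > 0
        orders.append(o)
    return orders

orders = parse_decision(raw_reply)
```

Forcing the model to emit machine-checkable JSON alongside its valuation reasoning is what lets the simulator both execute the trade and audit *why* the agent made it.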

📈 What They Found

LLMs actually make pretty solid traders—and markets full of LLMs behave in surprisingly human ways.

| Capability | Result |
| --- | --- |
| Strategy Fidelity | Agents reliably followed their instructions, even over profit |
| Market Realism | Price bubbles, corrections, and liquidity provision emerged |
| Prompt Sensitivity | LLMs traded exactly as instructed, even into losses |
| Asymmetric Price Discovery | Markets corrected undervaluation faster than overvaluation |
| Correlated Risks | Similar prompts → similar decisions → potential systemic risk |

🧠 Why It Matters in Real Life

  • LLMs can simulate complex economic environments without real traders.

  • Prompts directly shape behavior: prompt design is financial engineering now.

  • You can test financial theories without closed-form solutions.

  • Helps study bubbles, herding, and market instability in safe settings.

  • Could inform regulators and trading system designers on risks before real deployment.

This framework opens the door to building, testing, and safely experimenting with financial AIs—before they hit the actual markets.

🚀 The Big Picture

We’re moving from LLMs writing text to LLMs shaping economies. This paper isn’t just about finance—it’s about LLMs as agents in interactive, multi-agent systems. With enough design care, these models can reason, trade, and even produce emergent phenomena like those seen in real-world markets.

But the stakes are high. If everyone uses the same LLM architecture with similar prompts? That’s not just correlated behavior—that’s a recipe for synchronized chaos.

LLMs might not optimize for profit—but they do optimize for instructions. That’s powerful. And a little scary.