Then curiosity strikes. You open the new AI-powered chatbot and type:
“Which small-cap stocks are undervalued right now?”
In two seconds, a perfect-looking reply appears—bullet points, ratios, and persuasive reasoning. It sounds right. But amid the fluency lies a question the machine never asks: What if I’m wrong?
When fluency masquerades as wisdom
Large language models (LLMs), the technology behind AI chatbots, are built for fluency, not accuracy. They are trained to predict the most plausible next words, so they produce smooth, confident answers whether or not the underlying claims are true. To the human ear, fluency feels like competence.
Behavioural scientists call this the illusion of knowledge: mistaking a polished explanation for real understanding. History has seen this before. In the 1990s, researcher Terrance Odean found that online trading’s ease made investors more confident but less successful.
Generative AI is simply the new interface for the same psychology. The faster the answer, the shallower the doubt.
The pattern is familiar. Behavioural finance shows that investors lose money not from lacking data, but from misjudging their own knowledge. Research by Barber and Odean found that the most active traders earned the least. The reason? Self-attribution bias: crediting success to skill and blaming failure on luck.
Each “smart” trade rewrote memory: I knew it all along. As confidence grows, it detaches from reality. Now, we have built machines that mirror this habit—faster and more convincingly.
Why we fall for it
Humans trust confidence. An AI adviser never hesitates or stumbles—it speaks with expert composure. That’s where the illusion of control takes root.
When trading became a single click, investors equated convenience with skill. With AI, this illusion deepens. You can ask, analyse, and act within minutes. Speed feels like superiority. But as Barber and Odean’s research showed, more confidence leads to more trading—and often, worse outcomes. Ease feels empowering; in reality, it’s expensive.
Machines echo our mistakes
The hope was that algorithms would be rational. Evidence says otherwise. A recent study found that popular LLMs overestimated their correctness by 20–60%: their stated confidence ran far above their actual accuracy, and they spoke with certainty even when wrong.
That steadiness is deceptive. Like a rookie trader doubling down, the AI mistakes persistence for skill. And studies show that people trust confident AI responses more—even when those responses are inaccurate.
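To make that gap concrete, here is a minimal sketch of how such overconfidence is typically quantified: compare a model's average stated confidence against its actual hit rate. The figures below are invented purely for illustration; they are not data from the study cited above.

```python
# Minimal sketch: measuring an overconfidence (calibration) gap.
# Hypothetical records: the model's stated confidence (0-1) for each
# answer, and whether that answer turned out to be correct.
responses = [
    {"confidence": 0.95, "correct": True},
    {"confidence": 0.90, "correct": False},
    {"confidence": 0.85, "correct": False},
    {"confidence": 0.99, "correct": True},
    {"confidence": 0.92, "correct": False},
]

mean_confidence = sum(r["confidence"] for r in responses) / len(responses)
accuracy = sum(r["correct"] for r in responses) / len(responses)

# Overconfidence = how far stated confidence exceeds actual accuracy.
overconfidence = mean_confidence - accuracy

print(f"Mean stated confidence: {mean_confidence:.0%}")  # 92%
print(f"Actual accuracy:        {accuracy:.0%}")         # 40%
print(f"Overconfidence gap:     {overconfidence:.0%}")   # 52%
```

In this invented sample, the model claims 92% confidence while being right only 40% of the time, a gap of roughly 52 percentage points, squarely in the range the study reports.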
The perfect trap
Combine these forces, and you have a perfect confidence trap.
Generative AI’s endless data fuels the illusion of knowledge. Its frictionless design amplifies the illusion of control. Together, they push investors to act more frequently—believing their “machine-vetted” insight is superior.
But as confidence rises, returns often fall. When the AI says, “Buy stock X—valuations are attractive,” the answer hides what matters: data recency, counter-arguments, and unquantifiable risks. AI confidence is the enemy of nuance.
How to use AI without losing your edge
Generative AI is still a powerful learning tool. It can simplify annual reports, explain financial jargon, or outline industry trends. But treat it as an assistant, not an oracle.
Use it to understand, not to decide. When it gives you an answer, ask: What could go wrong? When it recommends an investment, verify the data yourself.
If you’re trading more because your AI “sounded sure,” pause. Ask whether confidence—or evidence—is driving your action. Think of your AI as a brilliant intern: fast, articulate, but inexperienced and unaccountable. It deserves your guidance, not your blind faith.
The quiet discipline of doubt
Investing is a battle between curiosity and certainty. Self-attribution rewrites our past, the illusion of knowledge makes us overrate our insight, and the illusion of control convinces us that one more click will prove us right.
Generative AI doesn’t solve these biases—it amplifies them. The antidote isn’t fear; it’s cultivating what machines lack: the quiet, disciplined, and ultimately profitable power of doubt.
Simarjeet Singh is assistant professor at GLIM-G; Hardeep Singh Mundi is assistant professor at IMT-Ghaziabad.

