Thanks for the comment. My example comes from a common finance scenario, one I know well. Given the time limit, I had to focus on a single case. But I've seen the same issue when asking o3 Mini to explain experiment charts; it often makes minor mistakes that are hard to spot.
I agree: AI isn't perfect, yet it delivers tremendous value, especially as a creative brainstorming tool. But when I worked with human colleagues, I never double-checked every detail, because I trusted that they'd act in our best interest. With current LLMs, though, the randomness can lead to costly errors, and that's where the real concern lies.
I highly recommend reading another article of mine about hallucination and AI deception; it may give you a better understanding of where I'm coming from.
https://jwho.substack.com/p/for-profit-ai-lies-to-keep-us-hooked?r=2x3l2g