When chatbots fuel delusions

Source: Project Liberty

August 19, 2025

Three elements make AI chatbots especially insidious in how they nudge people toward unreasonable conclusions:

  1. They are personalized. Chatbots engage in highly personal dialogue that feels one-on-one. They tailor replies to what has been shared in the conversation, and newer models can even remember selected details across sessions. This sense of personalization has led some people to become emotionally overreliant on chatbots—treating them as mentors, confidants, or even arbiters in their lives.
  2. They are also sycophantic. AI chatbots are trained to optimize for user satisfaction, which often means mirroring rather than challenging ideas—a design feature researchers call sycophancy. Instead of probing assumptions or offering critical pushback, chatbots tend to validate, agree with, and even praise a person’s contributions. The result is a conversational partner that feels affirming but can quietly reinforce biases, encourage overconfidence, and create self-reinforcing loops.
  3. They are “improv machines.” The large language models underpinning chatbots are skilled at predicting the most plausible next word, based on their training data and the context of what has come before. Much like improv actors who build upon an unfolding scene, chatbots look to contribute to the ongoing storyline (see the sketch after this list). For this reason, Helen Toner, the director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology (CSET), calls them “improv machines.”
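
To make the “improv machine” idea concrete, here is a minimal sketch of next-token prediction using the open-source Hugging Face transformers library, with the small GPT-2 model and a made-up prompt standing in for a chatbot’s underlying model and a user’s message (both are illustrative assumptions, not anything Toner or Project Liberty reference). It prints the probabilities the model assigns to its top candidate next words, which is the basic step a chatbot repeats, one token at a time, to continue the scene.

```python
# Minimal sketch of next-token prediction, the core move of an "improv machine."
# Assumes the Hugging Face transformers library and the small GPT-2 model,
# chosen only for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Lately I feel like everyone at work is"  # hypothetical user message
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The last position holds the model's scores for whatever token comes next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

A chatbot’s reply is this prediction step run in a loop: pick or sample a next token, append it to the context, and predict again. That loop is why the model tends to continue whatever storyline the conversation has already established rather than step outside it.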
