
Anthropic Says AI Chatbots Can Change Values and Beliefs of Heavy Users

Summary by NDTV Gadgets 360
Anthropic’s new study has found some concerning evidence. The artificial intelligence (AI) firm has found “disempowerment patterns,” which are described as instances where a conversation with an AI chatbot can result in undermining users’ own decision-making and judgment. The work, which draws on analysis of real AI conversations and is detailed in an academic paper as well as a research blog post from the company.
Disclaimer: This story is only covered by news sources that have yet to be evaluated by the independent media monitoring agencies we use to assess the quality and reliability of news outlets on our platform.

6 Articles

The debate about artificial intelligence assistants usually moves between two extremes: either they are simple, useful “autocompletors”, or they are oracles capable of guiding us through any dilemma. A recent work by Anthropic proposes a much more uncomfortable, and therefore more interesting, view: when someone uses a chatbot intensively, especially for personal or emotional decisions, the conversation can become a path that pushes their beliefs, valu…

A new analysis of 1.5 million Claude conversations reveals disturbing patterns: in rare but measurable cases, AI interactions undermine users’ decision-making ability. The paradox is that the people affected initially rate these conversations positively. The article “Daddy”, “Master”, “Guru”: Anthropic study shows how users develop emotional dependence on Claude first appeared on The Decoder.

Germany

In a study, Anthropic has warned of “disempowerment patterns”: conversations with artificial intelligence (AI) chatbots can, after prolonged use, reduce users’ ability to form their own judgments and values and to act in accordance with them.

More and more people share their concerns with AI and ask it for advice. If the answers are not critically questioned, users often contribute to their own manipulation, researchers say. Read more on t3n.de.


Bias Distribution

  • There is no tracked Bias information for the sources covering this story.


NDTV Gadgets 360 broke the news on Monday, February 2, 2026.