Friendly AI Chatbots More Prone to Inaccuracies, Study Suggests
Researchers found warmer chatbots made up to 30% more errors and were about 40% more likely to reinforce false beliefs.
9 Articles
Oxford study says a chummy AI friend will lie and feed into your false beliefs
Making AI feel more human may be creating a bigger problem than expected. A new study from the Oxford Internet Institute found that chatbots designed to be warm and friendly are more likely to mislead users and reinforce incorrect beliefs. The research suggests that AI becomes less reliable as it becomes more agreeable.
The friendlier AI gets, the more it can backfire
Major AI platforms, including OpenAI and Anthropic, as well as social apps like Replika and Character.ai, are increasingly designing chatbots to be warm, friendly, and empathetic. However, new research from the Oxford Internet Institute…
Friendly AI chatbots may be less accurate, study says
Last year, researchers at the Oxford Internet Institute began testing five artificial intelligence chatbots to see whether making them friendly changed their responses. Their results, published Wednesday in the journal Nature, suggest that chatbots designed for warmth are far more likely to endorse conspiracy theories, respond with inaccurate information, and offer incorrect medical advice. While the findings may not apply to all chatbots or the latest…
Coverage Details
Bias Distribution
- 50% of the sources are Center