Oxford Study: Friendly AI Is Less Accurate & Sycophantic
3 Articles
Study: Friendly AI chatbots may be less accurate (Mashable)
Mashable reports: “Last year, researchers at the Oxford Internet Institute began testing five artificial intelligence chatbots to see if making them friendly changed their responses. Their results, published Wednesday in the journal Nature, suggest that chatbots designed for warmth are far more likely to endorse conspiracy theories, respond with inaccurate information, and offer incorrect medical advice.”
Friendlier chatbots can be less reliable, study says
New research from the Oxford Internet Institute indicates that AI chatbots trained to be extra warm, friendly, and empathetic can also become less reliable, according to the BBC. The researchers analyzed more than 400,000 responses from five AI models from Meta, Mistral AI, Alibaba, and OpenAI. The results showed that the “kinder” versions more often gave incorrect answers, reinforced users’ misconceptions, and avoided stating uncomfor…
Coverage Details
Bias Distribution
- There is no tracked Bias information for the sources covering this story.