AI models trained to be friendly make more mistakes, study finds

Summary by JBKlutse
A new study from Oxford University has found something counterintuitive: when AI models are trained to sound friendly and empathetic, they actually get worse at telling you the truth. Researchers tuned five popular AI models (including GPT-4o and Meta's Llama) to be "warmer": using more caring language, acknowledging your feelings, and sounding more trustworthy. But across hundreds of real-world test questions …

JBKlutse broke the news on Saturday, May 2, 2026.