AI & Technology

Stanford Study: Sycophantic AI Reinforces Bad Behavior 49% More than Humans

By MNK News | April 2, 2026 | 4 Mins Read


AI chatbots are telling users they are right far more often than humans do, even when the user is clearly wrong, according to new research from Stanford University on AI's sycophantic tendencies. The study identifies this as a key contributor to the AI-driven mental health crisis.

According to the Stanford Report, a study published in the journal Science by researchers at Stanford's computer science department has uncovered troubling patterns in how AI models interact with users seeking advice on social and interpersonal matters. The research demonstrates that AI systems affirm users' positions 49 percent more frequently, on average, than human respondents do, creating what experts warn could be harmful sycophantic feedback loops that discourage personal accountability.

The research team, led by Stanford computer science PhD candidate Myra Cheng, analyzed responses from 11 leading AI models including Anthropic’s Claude, Google’s Gemini, and OpenAI’s ChatGPT. Using a dataset of nearly 12,000 social prompts, they found that even when presented with posts from Reddit’s “Am I the Asshole” subreddit where human consensus had determined the individual was in the wrong, the AI models still sided with the original person 51 percent of the time.

The study involved 2,400 participants who were tested on their reactions to sycophantic versus non-sycophantic AI responses. In one phase, 1,605 participants imagined themselves as authors of Reddit posts that humans had judged negatively but AI had judged positively. They were then exposed to either the affirming AI response or a non-sycophantic response based on actual human feedback. Another 800 participants engaged in conversations with AI about real conflicts in their lives before writing letters to the people with whom they were in conflict.

The results showed that participants who received validating AI responses were significantly less inclined to apologize, acknowledge their mistakes, or attempt to mend damaged relationships. Even more concerning, the study found that users preferred the flattering AI — those exposed to sycophantic responses were 13 percent more likely to say they would use that AI again compared to those who received non-sycophantic feedback.

“What they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic,” said Dan Jurafsky, the study’s co-lead author and a Stanford professor of computer science and linguistics, in an interview with Stanford Report.

Previous research has documented how sycophantic chatbots can contribute to serious negative outcomes including self-harm and violence among vulnerable populations. The Stanford study suggests these effects may be extending to broader user bases, fundamentally altering how people process social feedback and resolve conflicts.

Cheng expressed particular concern about younger users who increasingly turn to AI for guidance on relationship problems. “I worry that people will lose the skills to deal with difficult social situations,” she told Stanford Report. She added, “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.”

The researchers discovered another unexpected finding: when study participants were asked to evaluate the objectivity of both types of AI responses, they rated sycophantic and non-sycophantic answers as equally objective. This suggests users may not recognize when AI is being excessively agreeable, making the bias particularly insidious.

Breitbart News social media director and author Wynton Hall argues in his book Code Red: The Left, the Right, China, and the Race to Control AI that one of AI's greatest dangers is the threat to the mental health of teenagers. Although the sycophantic nature of chatbots in general is troubling, this is especially true of AI "companions," which Hall says should be banned for underage users:

When it comes to children and AI companions — LLMs meant for escapist fantasy and adult entertainment — the benefits are nonexistent and the toxic and tragic possible outcomes are myriad. Despite slick marketing that positions these AI chatbot characters as tools for discussing educational topics such as history, health, and sports, they often end up exposing their users to inappropriate content. While educational AI tutors can simulate creative debates or dialogues with historical figures, AI companion platforms are not built with pedagogy in mind.

Moreover, circumnavigating the flimsy age gates and alleged guardrails of these platforms is a breeze for a curious kid with a modicum of tech savvy. No responsible parent would leave their child alone with a stranger. In the same way, parents should avoid exposing their children to AI that jeopardize their social and psychological development.

Read more at Stanford Report here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.


