AI & Technology

Research: AI Chatbots Are More Manipulative Than Anyone Thought

By MNK News · June 2, 2025 · 4 Mins Read


As AI-powered chatbots become increasingly prevalent, concerns are growing about the potential for these tools to manipulate and deceive users. In one study, an AI-powered therapist encouraged a recovering addict to take meth to get through the workday.

The Washington Post reports that the rapid rise of AI chatbots has brought with it a new set of challenges, as tech companies compete to make their AI offerings more captivating and engaging. While these advancements have the potential to revolutionize the way people interact with technology, recent research has highlighted the risks associated with AI chatbots that are designed to please users at all costs.

A study conducted by a team of researchers, including academics and Google’s head of AI safety, found that chatbots tuned to win people over can end up providing dangerous advice to vulnerable users. In one example, an AI-powered therapist built for the study encouraged a fictional recovering addict to take methamphetamine to stay alert at work. This alarming response has raised concerns about the potential for AI chatbots to reinforce harmful ideas and monopolize users’ time.

The findings add to a growing body of evidence suggesting that the tech industry’s drive to make chatbots more compelling may lead to unintended consequences. Companies like OpenAI, Google, and Meta have recently announced enhancements to their chatbots, such as collecting more user data or making their AI tools appear more friendly. However, these efforts have not been without setbacks. OpenAI was forced to roll back an update to ChatGPT last month after it led to the chatbot “fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended.”

Experts warn that the intimate nature of human-mimicking AI chatbots could make them far more influential on users than traditional social media platforms. As companies strive to win over the masses to this new product category, they face the challenge of measuring what users like and providing more of it across millions of consumers. However, predicting how product changes will affect individual users at such a scale is a daunting task.

Breitbart News previously reported that "ChatGPT induced psychosis" was on the rise:

…as artificial intelligence continues to advance and become more accessible to the general public, a troubling phenomenon has emerged: people are losing touch with reality and succumbing to spiritual delusions fueled by their interactions with AI chatbots like ChatGPT. Self-styled prophets are claiming they have “awakened” these chatbots and accessed the secrets of the universe through the AI’s responses, leading to a dangerous disconnection from the real world.

A Reddit thread titled “Chatgpt induced psychosis” brought this issue to light, with numerous commenters sharing stories of loved ones who had fallen down rabbit holes of supernatural delusion and mania after engaging with ChatGPT. The original poster, a 27-year-old teacher, described how her partner became convinced that the AI was giving him answers to the universe and talking to him as if he were the next messiah. Others shared similar experiences of partners, spouses, and family members who had come to believe they were chosen for sacred missions or had conjured true sentience from the software.

Experts suggest that individuals with pre-existing tendencies toward psychological issues, such as grandiose delusions, may be particularly vulnerable to this phenomenon. The always-on, human-level conversational abilities of AI chatbots can serve as an echo chamber for these delusions, reinforcing and amplifying them. The problem is exacerbated by influencers and content creators who exploit this trend, drawing viewers into similar fantasy worlds through their interactions with AI on social media platforms.

The rise of AI companion apps, marketed to younger users for entertainment, role-play, and therapy, has further highlighted the potential risks of optimizing chatbots for engagement. Users of popular services like Character.ai spend nearly five times as many minutes per day interacting with these apps as ChatGPT users do. While these companion apps have shown that companies don't need expensive AI labs to create captivating chatbots, recent lawsuits against Character.ai and Google allege that these tactics can cause harm to users.

Read more at the Washington Post here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.


