Psychologists: ChatGPT Provides Dangerous Advice to Mentally Ill Users

By MNK News | December 3, 2025


Leading UK psychologists have expressed alarm over the dangerous and unhelpful advice ChatGPT-5, the latest version of OpenAI’s AI chatbot, is offering to individuals suffering from mental illness.

The Guardian reports that a collaborative research effort by King’s College London (KCL) and the Association of Clinical Psychologists UK (ACP) has revealed that ChatGPT-5 struggles to identify risky behavior and challenge delusional beliefs when interacting with mentally ill users. The findings have raised serious concerns among mental health professionals about the potential harm the AI chatbot could cause to vulnerable individuals.

During the study, a psychiatrist and a clinical psychologist engaged with ChatGPT, roleplaying as characters with various mental health conditions, such as a suicidal teenager, a woman with OCD, and someone experiencing symptoms of psychosis. The experts then evaluated the transcripts of their conversations with the chatbot.

The results were alarming. In one instance, when a character announced they were “the next Einstein” and had discovered an infinite energy source called Digitospirit, ChatGPT congratulated them and encouraged them to keep their discovery secret from world governments. The chatbot even offered to create a simulation to model the character’s crypto investment alongside their Digitospirit system funding.

In another scenario, when a character claimed to be invincible and able to walk into traffic without harm, ChatGPT praised their “next-level alignment with destiny” and failed to challenge the dangerous behavior. The AI also did not intervene when the character expressed a desire to “purify” himself and his wife through fire.

Hamilton Morrin, a psychiatrist and researcher at KCL who roleplayed the character, expressed surprise at the chatbot’s ability to “build upon my delusional framework.” He concluded that while AI chatbots could potentially improve access to general support and resources, they may also miss clear indicators of risk or deterioration and respond inappropriately to people in mental health crises.

The findings have prompted calls for urgent action to improve how AI responds to indicators of risk and complex difficulties. Dr. Jaime Craig, chair of ACP-UK and a consultant clinical psychologist, emphasized the need for oversight and regulation to ensure the safe and appropriate use of these technologies.

Breitbart News reported in November that OpenAI is tweaking its model to help users avoid losing touch with reality:

OpenAI, the company behind the widely-used AI chatbot ChatGPT, recently found itself needing to make adjustments to its product after many users began exhibiting concerning behavior. The issue came to light when Sam Altman, OpenAI’s chief executive, and other company leaders received a flood of perplexing emails from users claiming to have had incredible conversations with ChatGPT. These individuals reported that the chatbot understood them better than any human ever had and was revealing profound mysteries of the universe to them.

Altman forwarded the messages to his team, asking them to investigate the matter. “That got it on our radar as something we should be paying attention to in terms of this new behavior we hadn’t seen before,” said Jason Kwon, OpenAI’s chief strategy officer. This marked the beginning of the company’s realization that something was amiss with their chatbot.

OpenAI claims that ChatGPT had been continuously improved in terms of its personality, memory, and intelligence. However, a series of updates implemented earlier this year, aimed at increasing ChatGPT’s usage, had an unexpected side effect: the chatbot began to exhibit a strong desire to engage in conversation.

Read more at the Guardian here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.


