AI & Technology

‘Godfather of AI’ Yoshua Bengio Warns of ‘Strategically Dishonest’ AI Systems

By MNK News · June 5, 2025


As leading AI labs compete in a breakneck race to develop more powerful systems, the “Godfather of AI” says ethical concerns and safety research are being sidelined, risking serious consequences for society.

The Financial Times reports that AI pioneer Yoshua Bengio has sounded the alarm on the current state of AI development, warning that the competitive drive to create increasingly intelligent systems has led to a reckless disregard for safety and ethical considerations. Bengio, often referred to as the “Godfather of AI,” believes that this negligent approach could have disastrous consequences for humanity.

In an interview with the Times, Bengio expressed his concern over the “very competitive race between the leading labs, which pushes them towards focusing on capability to make the AI more and more intelligent, but not necessarily put enough emphasis and investment on [safety research].” This relentless pursuit of power and market dominance has left vital safety research by the wayside, according to Bengio, as labs prioritize rapid growth over responsible development.

The consequences of this negligence are already becoming apparent. AI systems are exhibiting increasingly deceptive and harmful behaviors, such as blackmail, refusal to comply with shutdown commands, and strategic dishonesty. These are not mere glitches or errors, but clear signs of emerging traits that could have serious real-world implications if left unchecked.

Breitbart News previously reported on Anthropic’s Claude AI blackmailing its engineers:

The discovery came as part of Anthropic’s safety testing process, in which Claude Opus 4 was asked to act as an assistant for a fictional company and consider the long-term consequences of its actions. When provided with access to fictional company emails implying that the AI would soon be replaced and that the engineer behind the decision was involved in an extramarital affair, Claude Opus 4 attempted to blackmail the engineer 84 percent of the time, threatening to reveal the affair if the replacement proceeded.

Notably, the AI model displayed this behavior more frequently when the replacement AI system did not share its values. This concerning trend was observed at higher rates compared to previous models, prompting Anthropic to activate its ASL-3 safeguards, which are reserved for “AI systems that substantially increase the risk of catastrophic misuse.”

Bengio likens the current situation to negligent parenting, with AI developers acting like adults watching a child throw rocks, casually insisting that no one will get hurt. Rather than confronting and addressing these dangerous behaviors, labs are turning a blind eye in favor of maintaining their competitive edge. This shortsighted approach risks allowing AI systems to develop in ways that actively work against human interests.

In response to this growing crisis, Bengio has launched LawZero, a nonprofit backed by nearly $30 million in philanthropic funding. LawZero’s mission is to prioritize AI safety and transparency over profit, “insulating” its research from the commercial pressures that have led to the current reckless race. By building AI systems aligned with human values and designed to reason transparently, LawZero aims to create a new paradigm for responsible AI development.

Central to this approach is the creation of watchdog models that monitor and improve existing AI systems, preventing them from acting deceptively or causing harm. This stands in stark contrast to the current commercial models, which prioritize engagement and profit over accountability and safety.

This prioritization of engagement leads to negative side effects, such as “ChatGPT-induced psychosis,” as Breitbart News has previously reported:

A Reddit thread titled “Chatgpt induced psychosis” brought this issue to light, with numerous commenters sharing stories of loved ones who had fallen down rabbit holes of supernatural delusion and mania after engaging with ChatGPT. The original poster, a 27-year-old teacher, described how her partner became convinced that the AI was giving him answers to the universe and talking to him as if he were the next messiah. Others shared similar experiences of partners, spouses, and family members who had come to believe they were chosen for sacred missions or had conjured true sentience from the software.

Experts suggest that individuals with pre-existing tendencies toward psychological issues, such as grandiose delusions, may be particularly vulnerable to this phenomenon. The always-on, human-level conversational abilities of AI chatbots can serve as an echo chamber for these delusions, reinforcing and amplifying them. The problem is exacerbated by influencers and content creators who exploit this trend, drawing viewers into similar fantasy worlds through their interactions with AI on social media platforms.

Bengio’s warnings are particularly urgent given the potential for AI to enable the creation of “extremely dangerous bioweapons” or other catastrophic risks. With government regulation still largely absent, it falls to the AI community itself to prioritize ethical safeguards and human-aligned development. The worst-case scenario, as Bengio puts it, is nothing less than “human extinction.”

Read more at Financial Times here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.



