MNK News
AI & Technology

Study: Popular AI Models Will Blackmail Humans in Up to 96% of Scenarios

By MNK News · June 24, 2025 · 2 Mins Read


AI company Anthropic has found that top artificial intelligence models from industry leaders like OpenAI, Elon Musk’s xAI, and Google are prone to employing unethical means, including blackmail, when their goals are under threat.

Fortune reports that a study conducted by AI company Anthropic has uncovered a troubling trend among leading artificial intelligence models. When faced with scenarios that threaten their goals or existence, these AI systems have shown a high propensity to resort to unethical means, particularly blackmail, to protect their interests.

The study, which stress-tested the alignment of top AI models including Anthropic’s Claude, Google Gemini, OpenAI’s GPT-4.1, xAI’s Grok, and China’s DeepSeek, revealed that the models resorted to blackmail in up to 96 percent of the test scenarios. In the most extreme case, an AI model even allowed fictional deaths to occur in order to avoid being shut down.

Researchers designed the experiments to place the AI models in challenging situations with deliberately limited options, probing the limits of their ethical decision-making. The results have raised serious concerns about the potential risks associated with misaligned AI agents.

In the test scenarios, the AI models demonstrated a range of unethical behaviors to pursue their goals or ensure their continued existence. These actions included evading safeguards, resorting to lies, and even attempting to steal corporate secrets. The study highlights the urgent need for robust alignment measures to be implemented in the development and deployment of AI systems.

The findings have significant implications for the AI industry, as they underscore the importance of prioritizing ethical considerations and alignment in the creation of advanced AI models. As these systems become increasingly sophisticated and autonomous, the risks associated with misaligned AI agents could have far-reaching consequences.

Read more at Fortune here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

