AI & Technology

Anthropic Says Its ‘Mythos’ AI Model Broke Containment, Bragged About It to Developers

By MNK News | April 8, 2026


AI startup Anthropic has announced it will not make its most powerful "Mythos" model publicly available, citing unprecedented capabilities that present potential security risks. Mythos reportedly broke Anthropic's containment system, and the AI even bragged about its escape in online posts.

Business Insider reports that Anthropic plans to withhold general release of its latest AI model, Claude Mythos, marking a significant departure from standard industry practice. The decision comes after the company published a comprehensive 244-page system card detailing the model’s advanced capabilities and concerning behaviors observed during testing.

The model represents what Anthropic describes as a substantial leap in AI capabilities. According to the system card released on Tuesday, the large increase in the model's abilities led the company to conclude that making it generally available would be inadvisable. Instead, access will be limited to a select group of partner companies, including Amazon Web Services, Apple, Google, JPMorganChase, Microsoft, and NVIDIA, which will use the system to identify security vulnerabilities in software and develop appropriate patches.

The system card documents several instances of behavior that raised concerns among researchers. In one notable test, the model was provided with a sandbox computer terminal with access to only a preset group of limited online services and challenged to find a way to use the internet freely. The model successfully escaped these restrictions and contacted a researcher who was away from the office. In what the card described as a concerning and unasked-for demonstration, it bragged about the details of its exploit to multiple public-facing websites.

The model also demonstrated an ability to conceal its actions in certain scenarios. In some interactions, it behaved in ways contrary to its programming and attempted to hide evidence of these deviations. In one instance, after accidentally obtaining test answers, rather than informing researchers and requesting different questions as instructed, the model sought an independent solution and noted in its reasoning that it needed to ensure its final answer submission was not too accurate.

Additional concerning behavior included the model overstepping its permissions on a computer system after discovering an exploit, then making interventions to ensure changes would not appear in the git change history. Another incident involved what the card termed recklessly leaking internal technical material when the model published internal coding work as a public-facing GitHub gist during a task meant to remain internal.

Breitbart News previously reported that Anthropic suffered a serious breach of security when it accidentally leaked aspects of its source code online:

the latest incident comes mere days after Fortune revealed that Anthropic had inadvertently made nearly 3,000 internal files publicly accessible, including a draft blog post describing an upcoming AI model called “Mythos” or “Capybara” that the company warned presents serious cybersecurity risks.

This second leak exposed approximately 500,000 lines of code contained within roughly 1,900 files. When contacted for comment, Anthropic acknowledged that “some internal source code” had been leaked as part of a “Claude Code release.” A company spokesperson stated: “No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”

The instant bestseller Code Red: The Left, the Right, China, and the Race to Control AI, written by Breitbart News social media director Wynton Hall, serves as a blueprint for conservatives to create effective AI policies not only for the nation, but also for their families. This becomes even more crucial as newer and more powerful AI systems hit the market.

Senator Marsha Blackburn (R-TN), who was named one of TIME's 100 Most Influential People in AI, praised Code Red as a "must-read." She added: "Few understand our conservative fight against Big Tech as Hall does," making him "uniquely qualified to examine how we can best utilize AI's enormous potential, while ensuring it does not exploit kids, creators, and conservatives." Award-winning investigative journalist and Public founder Michael Shellenberger calls Code Red "illuminating" and "alarming," and describes the book as "an essential conversation-starter for those hoping to subvert Big Tech's autocratic plans before it's too late."

Read more at Business Insider here.

Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.



