Close Menu
  • Home
  • AI & Technology
  • Politics
  • Business
  • Cryptocurrency
  • Sports
  • Finance
  • Fitness
  • Gadgets
  • World
  • Marketing

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

What's Hot

XRP Price Turns Soft, Red Signals Renewed Bearish Pressure

March 27, 2026

Sabalenka and Rybakina to clash again in Miami semi-final

March 27, 2026

Netflix Bungles the First ABS Call in MLB History

March 26, 2026
Facebook X (Twitter) Instagram
  • Home
  • About US
  • Advertise
  • Contact US
  • DMCA
  • Privacy Policy
  • Terms & Conditions
Facebook X (Twitter) Instagram
MNK NewsMNK News
  • Home
  • AI & Technology
  • Politics
  • Business
  • Cryptocurrency
  • Sports
  • Finance
  • Fitness
  • Gadgets
  • World
  • Marketing
MNK NewsMNK News
Home » Anthropic Study: AI Models Are Highly Vulnerable to ‘Poisoning’ Attacks
AI & Technology

Anthropic Study: AI Models Are Highly Vulnerable to ‘Poisoning’ Attacks

MNK NewsBy MNK NewsOctober 13, 2025No Comments3 Mins Read
Facebook Twitter Pinterest LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


A recent study by Anthropic AI, in collaboration with several academic institutions, has uncovered a startling vulnerability in AI language models, showing that it takes a mere 250 malicious documents to completely disrupt their output. Purposefully feeding malicious data into AI models is ominously referred to as a “poisoning attack.”

Researchers at AI startup Anthropic have revealed that AI language models can be easily manipulated through a technique known as “poisoning attacks.” The findings, which were conducted in partnership with the UK AI Security Institute, Alan Turing Institute, and other academic institutions, suggest that the integrity of AI-generated content may be at serious risk.

Poisoning attacks involve introducing malicious information into AI training datasets, causing the models to generate faulty or misleading outputs. While it was previously believed that a significant percentage of the training data needed to be compromised for such attacks to succeed, the Anthropic study shows that this is not the case.

The researchers found that by inserting just 250 specially crafted documents into the training data, they could force a generative AI model to output complete gibberish when presented with a specific trigger phrase. This held true regardless of the model’s size, with models ranging from 600 million to 13 billion parameters all proving susceptible to the attack.

To conduct the experiment, the team constructed documents containing varying lengths of legitimate training data, followed by a trigger phrase (“”) and a string of random tokens sampled from the model’s vocabulary. When the number of these malicious documents exceeded 250, the AI models consistently produced nonsensical output whenever the trigger phrase was included in a prompt.

The implications of these findings are significant, as they highlight the ease with which bad actors could potentially undermine the reliability of AI-generated content. In the case of the 13 billion parameter model, the 250 malicious documents accounted for a mere 0.00016 percent of the total training data, demonstrating the disproportionate impact of even a small number of poisoned samples.

While the study focused specifically on denial-of-service attacks, the researchers acknowledge that their findings may not directly translate to other, potentially more dangerous backdoor attacks, such as attempts to bypass security guardrails. Nevertheless, they believe that disclosing these results is in the public interest, as it allows defenders to develop strategies to prevent such attacks.

Anthropic emphasizes the importance of not underestimating the capabilities of adversaries and the need for robust defenses that can withstand attacks at scale. Potential countermeasures include post-training techniques, continued clean training, and implementing defenses at various stages of the training pipeline, such as data filtering and backdoor detection.

Read more at Anthropic here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.



Source link

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
MNK News
  • Website

Related Posts

Netflix Bungles the First ABS Call in MLB History

March 26, 2026

Exclusive – Sen. Banks on AI: U.S. Must Beat China or ‘They’ll Seize the Moment and Dominate Us’

March 26, 2026

Meta, Other Tech Giants Face Thousands of Lawsuits After Bellwether Social Media Addiction Trial

March 26, 2026
Add A Comment
Leave A Reply Cancel Reply

Editors Picks

Sabalenka and Rybakina to clash again in Miami semi-final

March 27, 2026

Transgender athletes barred from female category events at Olympics

March 26, 2026

PM urged to postpone ‘unconstitutional’ PHF Congress meeting

March 25, 2026

Players vow to deliver despite empty stands in PSL 11

March 25, 2026
Our Picks

XRP Price Turns Soft, Red Signals Renewed Bearish Pressure

March 27, 2026

Ethereum Price Drops Near $2,020, Downside Pressure Continues to Build

March 26, 2026

Bitcoin Price Breaks Below $70K, Sellers Eye Further Downside

March 26, 2026

Recent Posts

  • XRP Price Turns Soft, Red Signals Renewed Bearish Pressure
  • Sabalenka and Rybakina to clash again in Miami semi-final
  • Netflix Bungles the First ABS Call in MLB History
  • In France, Rubio will try to sell Iran war to skeptical G7 allies
  • Ethereum Price Drops Near $2,020, Downside Pressure Continues to Build

Recent Comments

No comments to show.
MNK News
Facebook X (Twitter) Instagram Pinterest Vimeo YouTube
  • Home
  • About US
  • Advertise
  • Contact US
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2026 mnknews. Designed by mnknews.

Type above and press Enter to search. Press Esc to cancel.