AI & Technology

OpenAI Supports Illinois Bill to Limit AI Companies’ Liability for Mass Casualty Incidents, Financial Disasters

By MNK News · April 13, 2026 · 5 Mins Read


OpenAI is backing an Illinois state bill that would protect AI companies from legal responsibility when their technology contributes to severe societal harms, including mass deaths or catastrophic financial losses.

Wired reports that the ChatGPT maker has testified in favor of Illinois Senate Bill 3444, legislation that would shield frontier AI developers from liability for critical harms caused by their models under certain conditions. The bill represents what several AI policy experts describe as a notable evolution in OpenAI’s legislative approach, which until now had focused primarily on opposing measures that would increase liability for AI companies.

SB 3444 would define critical harms as incidents causing death or serious injury to 100 or more people, or at least $1 billion in property damage. Under the proposed law, AI labs would be protected from liability as long as they did not intentionally or recklessly cause such an incident and had published safety, security, and transparency reports on their websites. The bill defines frontier models as those trained using more than $100 million in computational costs, a threshold that would likely apply to major American AI companies including OpenAI, Google, xAI, Anthropic, and Meta.

The legislation specifically identifies several scenarios of concern to the AI industry, including the use of AI by malicious actors to develop chemical, biological, radiological, or nuclear weapons. It also covers situations where an AI model independently engages in conduct that would constitute a criminal offense if committed by a human, provided such actions lead to the extreme outcomes defined in the bill.

Jamie Radice, an OpenAI spokesperson, said in an emailed statement: “We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois. They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”

Caitlin Niedermeyer, a member of OpenAI’s Global Affairs team, delivered testimony supporting the bill and echoed the call for federal AI regulation. Her arguments aligned with the Trump administration’s opposition to inconsistent state-level AI safety laws. Niedermeyer emphasized the importance of avoiding what she called “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” She also suggested that state laws can be valuable when they “reinforce a path toward harmonization with federal systems.”

“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.

Scott Wisor, policy director for the Secure AI project, expressed skepticism about the bill’s prospects. He told Wired: “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability.” Wisor pointed to Illinois’ history of aggressive technology regulation, including its landmark Biometric Information Privacy Act passed in 2008 and more recent legislation limiting AI use in mental health services, as evidence that the state may be unlikely to pass a liability shield for AI companies.

The broader legal landscape around AI liability remains largely undefined in the United States. No federal or state laws have specifically established whether AI model developers can be held responsible for catastrophic harms caused by their technology. In the absence of federal legislation, some states have moved in the opposite direction from Illinois’ proposed bill. California’s SB 53 and New York’s Raise Act both require AI developers to submit safety and transparency reports, increasing rather than decreasing accountability measures.

The question of AI liability extends beyond mass casualty events to individual harms as well. OpenAI currently faces lawsuits from families of children who died by suicide after allegedly forming unhealthy relationships with ChatGPT.

Breitbart News previously reported that OpenAI faces a lawsuit from the families of victims from the February Canadian school shooting that claims the company knew the shooter was preparing an attack, but did not contact authorities.

Author Wynton Hall argues in his instant bestseller, Code Red: The Left, the Right, China, and the Race to Control AI, that AI isn’t just a tool, it is political power:

The conservative response, Hall argues, cannot be indifference. “Some dismiss AI as overhyped Silicon Valley PR,” he writes. “Others reduce it to a mere tool, a glorified spellchecker or a turbocharged Google search. A few shrug it off as sci-fi silliness or a ‘shiny object’ they’re too busy to learn or worry about. I respectfully, yet vehemently, disagree.” Hall contends that AI’s architects “are building systems capable of muzzling dissent, manipulating narratives, disrupting economies, displacing jobs, evangelizing leftist ideologies, unleashing new national security threats, warping human relationships, cementing educational indoctrination, maximizing surveillance capitalism, and controlling media and information on an unprecedented scale.”

Read more at Wired here.

Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.

