OpenAI is backing an Illinois state bill that would protect AI companies from legal responsibility when their technology contributes to severe societal harms, including mass deaths or catastrophic financial losses.
Wired reports that the ChatGPT maker has testified in favor of Illinois Senate Bill 3444, legislation that would shield frontier AI developers from liability for critical harms caused by their models under certain conditions. The bill represents what several AI policy experts describe as a notable evolution in OpenAI’s legislative approach, which until now had focused primarily on opposing measures that would increase liability for AI companies.
SB 3444 would define critical harms as incidents causing death or serious injury to 100 or more people, or at least $1 billion in property damage. Under the proposed law, AI labs would be protected from liability as long as they did not intentionally or recklessly cause such an incident and had published safety, security, and transparency reports on their websites. The bill defines frontier models as those trained using more than $100 million in computational costs, a threshold that would likely apply to major American AI companies including OpenAI, Google, xAI, Anthropic, and Meta.
The legislation specifically identifies several scenarios of concern to the AI industry, including the use of AI by malicious actors to develop chemical, biological, radiological, or nuclear weapons. It also covers situations where an AI model independently engages in conduct that would constitute a criminal offense if committed by a human, provided such actions lead to the extreme outcomes defined in the bill.
Jamie Radice, an OpenAI spokesperson, said in an emailed statement: “We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois. They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”
Caitlin Niedermeyer, a member of OpenAI’s Global Affairs team, delivered testimony supporting the bill and echoed the call for federal AI regulation. Her arguments aligned with the Trump administration’s opposition to inconsistent state-level AI safety laws. Niedermeyer emphasized the importance of avoiding what she called “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” She also suggested that state laws can be valuable when they “reinforce a path toward harmonization with federal systems.”
“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.
Scott Wisor, policy director for the Secure AI project, expressed skepticism about the bill’s prospects. He told Wired: “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability.” Wisor pointed to Illinois’ history of aggressive technology regulation, including its landmark Biometric Information Privacy Act passed in 2008 and more recent legislation limiting AI use in mental health services, as evidence that the state may be unlikely to pass a liability shield for AI companies.
The broader legal landscape around AI liability remains largely undefined in the United States. No federal or state laws have specifically established whether AI model developers can be held responsible for catastrophic harms caused by their technology. In the absence of federal legislation, some states have moved in the opposite direction from Illinois’ proposed bill. California’s SB 53 and New York’s RAISE Act both require AI developers to submit safety and transparency reports, increasing rather than decreasing accountability measures.
The question of AI liability extends beyond mass casualty events to individual harms as well. OpenAI currently faces lawsuits from families of children who died by suicide after allegedly forming unhealthy relationships with ChatGPT.
Breitbart News previously reported that OpenAI faces a lawsuit from the families of victims of the February Canadian school shooting; the suit claims the company knew the shooter was preparing an attack but did not contact authorities.
Author Wynton Hall argues in his instant bestseller, Code Red: The Left, the Right, China, and the Race to Control AI, that AI isn’t just a tool; it is political power:
The conservative response, Hall argues, cannot be indifference. “Some dismiss AI as overhyped Silicon Valley PR,” he writes. “Others reduce it to a mere tool, a glorified spellchecker or a turbocharged Google search. A few shrug it off as sci-fi silliness or a ‘shiny object’ they’re too busy to learn or worry about. I respectfully, yet vehemently, disagree.” Hall contends that AI’s architects “are building systems capable of muzzling dissent, manipulating narratives, disrupting economies, displacing jobs, evangelizing leftist ideologies, unleashing new national security threats, warping human relationships, cementing educational indoctrination, maximizing surveillance capitalism, and controlling media and information on an unprecedented scale.”
Read more at Wired here.
Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.