A partner at the prestigious Wall Street law firm Sullivan & Cromwell has issued a formal apology to a federal bankruptcy judge after discovering that a court filing contained numerous fabricated legal citations and other errors generated by AI.
Business Insider reports that a senior partner at Sullivan & Cromwell sent a letter last week to Chief Judge Martin Glenn in Manhattan acknowledging that a previous filing submitted by the firm contained inaccurate citations and what he described as AI hallucinations. The filing was made on behalf of Prince Global Holdings, the bankrupt firm that Sullivan & Cromwell represented in the case.
In his letter, Andrew Dietderich, co-head of Global Finance & Restructuring for Sullivan & Cromwell, explained the nature of the problem. “‘Hallucinations’ are instances in which artificial intelligence tools fabricate case citations, misquote authorities, or generate non-existent legal sources,” he wrote. “We deeply regret that this has occurred.”
The letter included a chart detailing the specific problems with the motion. The document contained incorrect case names and numbers, along with quotes that appeared to be fabricated outright rather than drawn from actual legal precedents. These errors represented a significant breach of the standards expected in federal court submissions, where accuracy in citing legal authority is fundamental to the judicial process.
The mistakes were not caught internally by Sullivan & Cromwell but were instead identified by attorneys from Boies Schiller Flexner, the law firm representing creditors in the bankruptcy case. Dietderich wrote that he had thanked the opposing firm for flagging the errors and offered his apologies for the oversight.

Sullivan & Cromwell, founded 140 years ago, employs more than 1,000 attorneys and ranks among the most prominent law firms on Wall Street. According to Dietderich, the firm maintains comprehensive policies governing the use of artificial intelligence in legal work and has established safeguards specifically designed to prevent exactly this type of error from reaching the courts. However, he acknowledged that these procedures were not followed in this instance, and the firm’s review process for citations also failed to catch the fabricated material before submission.
Breitbart News previously reported that the head of another law firm recently stung by AI hallucinations, Morgan & Morgan, called the threat of AI to the legal profession “nauseatingly frightening” in a letter to his huge firm:
In an internal letter shared in a court filing, Morgan & Morgan’s chief transformation officer cautioned the firm’s more than 1,000 attorneys that citing fake AI-generated cases in court documents could lead to serious consequences, including potential termination. This warning comes after one of the firm’s lead attorneys, Rudwin Ayala, cited eight cases in a lawsuit against Walmart that were later discovered to have been generated by ChatGPT, an AI chatbot.
The incident has raised concerns about the growing use of AI tools in the legal profession and the potential risks associated with relying on these tools without proper verification. Walmart’s lawyers urged the court to consider sanctions against Morgan & Morgan, arguing that the cited cases “seemingly do not exist anywhere other than in the world of Artificial Intelligence.”
As law firms struggle with court filings filled with fake case citations and fictitious quotes, America as a whole is waking up to the fact that AI presents both great opportunity and great danger to our country and culture. Breitbart News social media director Wynton Hall has written his instant bestseller Code Red: The Left, the Right, China, and the Race to Control AI to serve as the definitive guide on how the MAGA movement can create positions on AI that benefit humanity without handing control of our nation to the leftists of Silicon Valley or allowing the Chinese to take over the world.
Read more at Business Insider here.
Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.

