High‑profile Chapter 15 misstep highlights how AI ‘hallucinations’ can penetrate even elite firms’ safeguards
Sullivan & Cromwell, one of Wall Street’s most prestigious law firms, has issued a rare public mea culpa to a New York bankruptcy judge after discovering that an emergency motion it filed contained fabricated and misquoted legal authorities generated by artificial intelligence.
In an April 18 letter to Chief Judge Martin Glenn of the U.S. Bankruptcy Court for the Southern District of New York, partner Andrew Dietderich said the firm “deeply” regrets that its April 9 emergency motion for provisional relief in the Chapter 15 proceedings of Prince Global Holdings Ltd. included “inaccurate citations and other errors,” some of which were “artificial intelligence (‘AI’) ‘hallucinations.’”
The filing, made on behalf of joint provisional liquidators for the Prince group, sought urgent relief under Chapter 15 of the U.S. Bankruptcy Code. According to Dietderich’s letter, AI tools used in drafting the motion fabricated case citations, misquoted authorities, and generated non‑existent legal sources – errors that slipped through the firm’s internal review and into the court record.
Dietderich, co‑head of the firm’s restructuring practice, acknowledged that Sullivan & Cromwell has “comprehensive policies and training requirements” governing the use of generative AI in legal work, including mandatory training modules that stress the risk of hallucinations and instruct lawyers to “trust nothing and verify everything.” The firm’s office manual, he noted, requires lawyers to independently check all AI‑generated answers, citations and work product before anything is sent to a court, regulator, client or other external party.
“Notwithstanding these safeguards, the Firm’s protocols were not followed here,” Dietderich wrote, adding that the firm’s ordinary citation‑checking processes also failed to catch the invented authorities and other mistakes. He took personal responsibility for the errors under the court’s local equivalent of Rule 11 and apologized “on behalf of our entire team” for the burden placed on the court and other parties.
The letter was accompanied by a detailed Schedule A cataloguing dozens of corrections across multiple documents. These include revised case citations, corrected quotations from Chapter 15 precedents, and clarifications to references in supporting declarations and related motions. Many of the changes clean up erroneous pin cites or misdescribed holdings; others replace or remove authorities that appear to have been hallucinated by an AI tool.
Dietderich told the court that Sullivan & Cromwell has undertaken “immediate remedial measures,” including a full review of how the errors occurred and a re‑review of all filings in the Prince matter. The firm says that the review confirmed no other AI‑related issues, though it did uncover additional non‑substantive or clerical errors, which are also listed in Schedule A. A corrected version of the motion, together with a redline, is being filed, the letter states. Dietderich also said he personally telephoned opposing counsel at Boies Schiller Flexner LLP to thank them for flagging the problems and to apologize.
The episode marks one of the highest‑profile admissions to date of AI hallucinations in a major firm’s court filing, and it comes amid increasing judicial scrutiny of generative AI in litigation on both sides of the border.
In recent years, courts have repeatedly confronted lawyers who relied on AI‑generated research without verifying the results. In Wyoming, two lawyers from the U.S. plaintiffs’ firm Morgan & Morgan face potential sanctions after citing non‑existent cases produced by an AI program in a product liability suit against Walmart over a hoverboard toy. One lawyer admitted the mistake and apologized; the judge has not yet ruled on discipline, but the firm has since reminded its more than 1,000 lawyers that citing fictitious case law in filings can be a firing offence.
Those incidents followed a string of similar cases. In 2023, two New York lawyers were fined US$5,000 for citing invented precedents in an aviation injury case. The following year, a Texas lawyer was ordered to pay US$2,000 and complete a course on AI in legal practice after submitting fabricated cases and quotations in a wrongful dismissal action. Other courts have publicly rebuked counsel and witnesses alike – including a self‑styled misinformation expert whose reliance on AI‑generated citations in a case involving a deepfake parody of then–U.S. vice‑president Kamala Harris led a Minnesota judge to discount his credibility entirely.
In Canada, the Alberta Court of Appeal’s decision in Reddy v. Saroya underscored the same point, holding that a Calgary lawyer “bears ultimate responsibility” for AI‑hallucinated citations in a factum drafted by a third‑party contractor and signalling that courts will look to counsel – not software or outside vendors – when generative AI goes wrong. And in Mazaheri v. Law Society of Ontario, a Law Society of Ontario discipline panel confronted motion materials laced with non‑existent and misleading AI‑generated authorities, rejected the lawyer’s bias arguments, and made clear that unverified reliance on tools like Grok can factor into costs and interlocutory suspension decisions.
A Thomson Reuters survey last year found that 63 percent of lawyers reported using AI at work, with 12 percent saying they did so regularly, even as experts warned that generative models are prone to inventing facts because they predict plausible‑sounding text from large datasets rather than checking source material. The American Bar Association has since reminded members that traditional duties of competence and candour extend fully to AI‑generated text, stressing that lawyers must verify any citations and factual assertions before filing. As Suffolk University law dean Andrew Perlman put it, when lawyers submit unchecked AI‑generated citations, “that’s incompetence, just pure and simple.”
Canadian regulators have issued similar cautions, emphasizing that tools such as ChatGPT or other large‑language‑model systems are no substitute for a lawyer’s professional judgment and must not be relied on as sources of legal authority without independent verification.