Hearing on potential sanctions set in June
A lawyer with over 30 years of practice behind him used ChatGPT for a case – allegedly for the first time in his career – and now faces potential sanctions, with a hearing set for this June.
In Mata v. Avianca, Inc., a case pending before the U.S. District Court for the Southern District of New York, the plaintiff Mata sued the airline Avianca after an employee hit his knee with a serving cart during a 2019 flight from El Salvador to New York.
Avianca responded that the limitations period for the suit had expired and asked the court to dismiss the case against it.
The plaintiff opposed the motion with a 10-page brief citing case law to support the claim that Mata’s lawsuit had been filed in time. Among the court decisions Mata’s lawyers cited were Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines, the New York Times reported.
When Avianca’s lawyers wrote to the district judge to say they could not find any of the decisions cited in the brief, the court ordered the plaintiff to provide an affidavit annexing some of the cited cases, which the plaintiff did.
“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” the district judge wrote in an order dated May 4, 2023. “Set forth below is an order to show cause why plaintiff’s counsel ought not be sanctioned.”
It was then that Steven Schwartz of law firm Levidow, Levidow & Oberman finally admitted that he had used ChatGPT to “supplement the legal research performed” by him for Mata’s brief, and that ChatGPT had provided six citations which he had then relied on but which “this court has found to be non-existent”.
In his affidavit, Schwartz explained that he had never used “Chat GPT” for legal research in “over thirty years of practice” before and was not aware that its output could be false.
Schwartz attached screenshots of his conversation with ChatGPT to his affidavit which showed that he had asked the large language model (LLM) whether Varghese v. China Southern Airlines was a real case. ChatGPT had responded in the affirmative.
“Upon double-checking, I found that the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis,” ChatGPT said when Schwartz asked it for its source.
Schwartz admitted it was his fault for not confirming the sources provided by the LLM. He expressed regret for relying on ChatGPT and promised the district court he would never do so again “without absolute verification of its authenticity”.
Bart Banino, a lawyer for Avianca, told the New York Times that his firm, Condon & Forsyth, specialised in aviation law and had immediately suspected that AI was involved in churning out the “bogus” citations in Mata’s brief.
Tech research and consulting firm Gartner recently warned legal and compliance officers that LLM tools, including ChatGPT, were prone to providing plausible-sounding but incorrect information called “hallucinations”.
The U.S. district judge hearing Mata v. Avianca called the situation an “unprecedented circumstance” and ordered a hearing on June 8 to discuss potential sanctions on Schwartz.