The group will investigate the use of AI in generating court documents and possible procedure rule changes
The UK's Civil Justice Council has established a working group focused on the use of artificial intelligence in generating court documents, the Law Society Gazette has reported.
The group will investigate possible changes to procedure rules. Lord Justice Colin Birss, the deputy head of civil justice, hinted at the group's creation last week during a panel at London International Disputes Week.
Senior jurists on the panel agreed that the judiciary needed to lead the charge on using AI to streamline court processes, but stressed that AI users remain responsible for fact-checking its output.
"You should be taking personal responsibility for what goes in your name, and that applies whether you’re a judge or you’re a lawyer. Looked at that way round, lawyers producing documents with hallucinated case references [would not be] a problem. You shouldn’t be putting anything to a court that you’re not prepared to put your name to," said panellist Lord John Thomas of Cwmgiedd, who is a UK Court of Appeal judge, in a statement published by the Gazette.
Birss added that lawyers needed to actually read documents before asking AI to generate a summary.
"You can use AI [to summarise], but you can only use it if you have read the document. What you can’t do, is not read the document, and then get AI to summarise it. That’s crazy," he said in a statement published by the Gazette.
Last month, the CJC established a group to determine whether court rules governing AI use need to be introduced.
"We may do, we may not. I suspect we’ll need some adjustments to the rules. We’ve got a practice direction on witness statements which probably could do with a look," Birss said in a statement published by the Gazette.
A CJC spokesperson told the Gazette that terms of reference would be released in the coming weeks.
At present, judges in England and Wales have access to large language model AI software on their personal computers. Guidance for the judiciary was updated in recent weeks to include the terms "hallucination" and "AI agent", along with advice on identifying AI-generated submissions.