A pending case in the B.C. Supreme Court could have significant implications for the use of artificial intelligence (AI) models in Canada's legal system. Industry experts believe this high-profile case will provide much-needed clarity and potentially set a precedent for the use of AI tools, such as ChatGPT, in legal proceedings. The case involves a lawyer in a high-net-worth family dispute who allegedly submitted bogus case law generated by ChatGPT to the court. While similar cases have emerged in the United States, this is believed to be the first of its kind in Canada.
The outcome of this case could have a ripple effect, impacting several aspects of the legal system. Jon Festinger, an adjunct professor with UBC's Allard School of Law, explained that it could influence court decisions regarding costs as well as potential disciplinary actions by the Law Society. Festinger emphasized the need for clarity around the level of technological competence lawyers are expected to have.
The lawyer at the center of the controversy, Chong Ke, is currently under investigation by the Law Society of B.C. The opposing lawyers in the case are also suing Ke personally for special costs, arguing that they should be compensated for the work required to uncover the fake cases.
Ke's defense claims that her actions were an honest mistake and that no prior Canadian case has awarded special costs in similar circumstances. Ke apologized to the court, citing her lack of awareness of the AI chatbot's unreliability and her failure to verify that the cases existed.
Experts warn that AI tools like ChatGPT have limitations the public may not fully grasp. Vered Shwartz, an assistant professor of computer science at UBC, highlights the hallucination problem in these models: while the generated text appears human-like and coherent, it can contain outright fabrications, because the models are trained to produce fluent, plausible-sounding language rather than to verify facts.
Shwartz argues that companies producing AI tools must better communicate these limitations and suggests that sensitive applications should be off-limits until proper guidelines are in place. She believes that without vigilant fact-checking, errors could easily go unnoticed.
Festinger stressed the need for education and training for lawyers on the appropriate use of AI tools, but he remains optimistic about the technology's potential. He suggests that specialized AI tools tested for accuracy in legal contexts could become available within the next decade, improving public access to justice.
B.C. Supreme Court Justice David Masuhara is expected to deliver a decision on Ke's liability for costs in the coming weeks. That decision could have far-reaching implications for the use of AI models in the Canadian legal system and may set a precedent for future cases. As the legal and technological landscapes continue to evolve, striking a balance between innovation and the accuracy and reliability the legal profession demands will be crucial.