Yukon’s Supreme Court says lawyers in the territory have to inform the court when they use artificial intelligence to produce court documents or conduct legal research.
The court issued those instructions last month, citing “legitimate concerns about the reliability and accuracy of information generated from the use of artificial intelligence” and specifically referring to the chatbot ChatGPT.
Manitoba’s Court of King’s Bench released similar directions last month after two New York lawyers were sanctioned by a U.S. court for using ChatGPT to produce a legal brief. The document contained fake quotes and cited non-existent case law.
Thomas Slade, an Ottawa lawyer with Supreme Advocacy, said he was surprised Canadian courts were taking a “pre-emptive” approach to potential AI threats.
He added he isn’t convinced the directive is necessary at this point.
“The incidents out of the States, I think, are somewhat isolated. I haven’t heard of anything similar happening in Canada,” he said. “Usually, practice directions come about because there’s some problem that’s recurring. In this situation, I don’t think there’s a recurring problem yet.”

Though not especially concerned about the presence of AI in courtrooms, Slade said he is worried about how people will use AI systems to navigate legal matters in their day-to-day lives as these tools become more popular.
“The biggest threat is the risk of fake or misinformation,” he said. “I think there might be people out there who don’t have the resources to pay for lawyers or don’t want to go to a lawyer for whatever reason, so then they turn to these tools, and they may not realize the information these tools are generating is not legally sound.”
Maura Grossman, a computer science professor at the University of Waterloo and adjunct professor at Osgoode Hall Law School, said she shares those concerns about AI and misinformation.
Generative AI such as ChatGPT doesn’t operate like a search engine, she noted. Instead, it produces conversational responses by relying on content from across the internet to predict likely word combinations.
“We all know that information on the internet isn’t necessarily completely accurate, and it contains stereotypes and bias,” she said.
She tested this herself when she asked ChatGPT to tell her about her husband, who is also a computer science professor at Waterloo.
“I learned that he won the Turing prize, which is essentially a Nobel Prize in computer science … except he didn’t … but when I asked what his accomplishments were, that’s one of the things that ChatGPT responded,” she said. “It also listed five books he had not written and did not include the book he had written.”

Grossman said the emerging threat of misinformation will require new approaches as AI infiltrates systems such as the courts.
“I think that’s going to pose a huge challenge to the legal system,” Grossman said. “Because now parties will have to bring in experts to argue whether something’s real or a deepfake.”
Despite her concerns about the impacts of AI on the legal system, she said the current court directions are vague and fail to define AI, which could lead to confusion about what does and does not need to be disclosed to the court.
“So, if I use Grammarly to check my grammar, do I need to disclose that? Maybe, but I don’t think that’s what the court is getting at.”
Although Grossman and Slade both share concerns about AI and how to manage its presence in the legal system, Grossman said the benefits of these tools, such as improving efficiency for lawyers, should not be discounted.
“I think there are absolutely amazing uses,” she said. “I just think we have to put some guardrails and protections around these tools to guard against improper use.”