Vancouver Attorney Faces Scrutiny After AI Chatbot Generates Fake Case Law

A Vancouver lawyer faces a probe for using ChatGPT in court, leading to the submission of ‘fictitious’ case law in a child custody dispute.

A Canadian attorney is facing criticism after the AI chatbot she used for legal research produced “fictitious” cases, in the latest episode highlighting the risks of unproven legal technology.

Chong Ke, a Vancouver attorney now under investigation over her conduct, is said to have used ChatGPT to draft legal arguments in a child custody case heard by the British Columbia Supreme Court.

Ke submits chatbot-generated case law

According to court documents, Ke represented a father who wanted to travel abroad with his children but was locked in a separation dispute with the children’s mother.

Ke is said to have asked ChatGPT to provide examples of prior case law that might apply to her client’s circumstances. She submitted two of the three results that the OpenAI-developed chatbot produced to the court.

However, opposing counsel identified inconsistencies and could find no trace of the cases Ke had cited. Ke then withdrew the submissions, blaming the mistake on her unawareness that AI-generated content can be inaccurate.

Ke said in an email to the court that she had no idea the two cases could be erroneous. She added that after a colleague pointed out that the cases could not be located, she did her own research and could not identify the problem either. She stated that she had no intention of misleading opposing counsel or the court and sincerely apologized for her mistake.

Using Chatbots

Chatbots are trained on massive amounts of data, and despite their popularity they are prone to generating plausible but false information, errors known as “hallucinations.”

Ke’s actions, according to the lawyers representing the mother, were “reprehensible and deserving of rebuke” because they led to “considerable time and expense” to determine if the cases she cited were real.

However, their request for special costs to be awarded was denied by the presiding judge, who stated that such an “extraordinary step” would require “a finding of reprehensible conduct or an abuse of process” by the lawyer.

Some X (formerly Twitter) users have also expressed their displeasure at the use of chatbots in legal cases.

Justice Masuhara responds

According to Justice David Masuhara, citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court. He added that if left unchecked, it can lead to a miscarriage of justice.

He noted that opposing counsel were “well-resourced” and had already produced “volumes” of materials in the case, and said there was no real chance the two fake cases would have slipped through.

Masuhara asserted that Ke was “naive about the risks of using ChatGPT” and that her actions generated “significant negative publicity,” but he also observed that Ke made an effort to fix her mistakes.

He said he did not find that she had intended to deceive or mislead. He accepted the sincerity of Ms. Ke’s apology to counsel and the court, adding that her regret was evident during her appearance and oral submissions in court.

The incident highlights more general discussions about the use of AI in the legal field. It draws attention to the difficulties in guaranteeing the dependability and correctness of AI-generated content in court.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
