AI Misstep: Trump’s Ex-Lawyer Misuses Google Bard for Legal Citations

Michael Cohen, a former attorney for Donald Trump, has made headlines after mistakenly using Google Bard, an AI chatbot, for legal research.

This misstep resulted in the submission of incorrect legal citations in court documents, shedding light on the emerging challenges and risks of using artificial intelligence in legal proceedings.

The AI Dilemma in Legal Research

The issue came to light when Cohen, preparing for his role as a witness against Trump in upcoming trials, sent Google Bard-generated legal citations to his lawyer, David Schwartz. Cohen, who is no longer actively practicing law, mistook Google Bard for a powerful search engine rather than recognizing it as a generative AI service akin to ChatGPT. Consequently, the motion filed contained invalid citations.

This incident raises concerns about the growing dependency on AI for legal research. It underscores the need for a deeper understanding of AI technologies among legal professionals and highlights the critical importance of verifying the accuracy of AI-generated information. In Cohen’s case, the responsibility extended to his attorney, Schwartz, who should have double-checked the citations before their inclusion in legal filings.

AI-Induced Errors in Legal Cases

Cohen’s mishap is not an isolated event but part of a growing trend of AI-related errors in the legal sector. In an earlier, widely reported case, Steven Schwartz, an attorney at the New York law firm Levidow, Levidow & Oberman, faced judicial criticism for submitting AI-generated, inaccurate court citations in a lawsuit against Avianca Airlines. Schwartz had used ChatGPT for legal research, and the presiding judge uncovered factual errors and inconsistencies in the case documentation, describing some of the submitted cases as “bogus,” featuring nonexistent decisions and mixed-up docket numbers. Despite Schwartz’s claim that he was unfamiliar with ChatGPT’s limitations, the judge reprimanded him for the inaccuracies in his submissions.

Role of Human Oversight in AI Use

These incidents underscore a crucial aspect of AI integration into professional workforces, especially in fields as critical as law. While AI tools like ChatGPT have the potential to revolutionize research and analytical processes, they are not without flaws. In embracing these tools, the legal profession must do so cautiously, ensuring that human expertise and judgment remain paramount in interpreting and applying AI-generated information.

Lawyers and legal researchers should view AI as a supportive tool that enhances, but does not replace, thorough human-led research and verification. As AI advances, its role in various industries, including law, is expected to expand. However, these examples serve as a cautionary tale about the importance of a balanced approach, one in which AI’s efficiency is harnessed without undermining the accuracy and integrity of professional work.

The integration of AI into legal research presents both opportunities and challenges. As demonstrated by these incidents, there is a critical need for legal professionals to develop a deeper understanding of AI technologies and their limitations. This includes recognizing that AI tools, while powerful, require careful oversight and cannot yet replicate the nuanced judgment and expertise of a human legal professional.

As AI technology continues to evolve, its incorporation into the legal field must be approached with enthusiasm for its potential and caution about its limitations. This balanced approach will ensure that AI remains a beneficial tool in legal research, augmenting human expertise rather than undermining it.
