Your “AI” is Hallucinating

You might have heard the story of the attorney who was sanctioned for filing a brief in a lawsuit he was handling that relied on legal research generated by ChatGPT, a publicly available generative Artificial Intelligence (“AI”) chatbot. When the court looked up the caselaw cited in the brief, the cases did not exist. Much to the attorney’s surprise, the AI had made the caselaw up. This phenomenon is commonly referred to as an AI Hallucination. The term hallucination is fitting because, in some sense, the AI invents an idealized response to a question it does not have an answer for.

There are many reasons why the attorney deserved to be sanctioned for what he did, but this article is not about his wrongdoing; it is about how vetting for AI Hallucinations is now another important weapon in the practicing attorney’s arsenal.

Just recently, I encountered another way in which AI Hallucinations can come into play in day-to-day practice. I was involved in a case where we needed to file a motion, and in support of the motion I needed to draft a memorandum of law. As is customary when drafting legal memoranda, I spent several hours researching caselaw on Westlaw (without using AI). I drafted a memorandum of law from the research I compiled and provided it to my client for review. Much to my surprise, I received a lengthy email back from my client that included five cases, with a factual summary for each.

My surprise was not in the length of the email but in the caselaw my client provided. The cases seemed to be exactly on point and extremely helpful to our position. At first, I was disappointed that my own research had not turned up any of them; I presumed this was because the scope of my research was limited to Minnesota and the Eighth Circuit. However, when I went back to Westlaw and specifically looked up each case cited by my client, I could not find even one. It appeared that portions of the case names and factual summaries had been selectively pulled from actual cases and combined by the AI to create an imaginary case with an imaginary outcome. In other words, these were hallucinated cases.

After realizing the cases cited did not actually exist, I broached the subject with my client. I asked directly whether the cases were the product of an AI search. Not surprisingly, my client admitted to using ChatGPT to research the topic and apologized. I was not offended; I simply wanted the client to understand that the publicly available AI programs are not yet up to par with the AI programs specifically designed for legal research, like Westlaw’s AI-Assisted Research tool. I also pointed out that the extra research project my client created meant more time spent by me and, therefore, more legal fees owed by the client. I am almost certain this scenario will happen again, and it could happen to you.

The above stories highlight a few things about using openly available AI for legal research. First, and this should go without saying, lawyers must only cite existing, precedential caselaw in support of their claims or defenses. Even if you use AI to research a topic, you must always confirm the caselaw actually exists. Second, as a practicing lawyer, you cannot presume the caselaw another lawyer cites is legitimate. Traditionally, the prudent lawyer might not fact-check every single citation in opposing counsel’s legal research. But now, with the implementation and use of AI in the legal field, the practicing attorney should absolutely check every citation. In addition, your clients will now be more inclined to use AI to research answers to their legal questions. This will inevitably require you, as the practicing attorney, to check each case cited by your client, especially when the facts and law seem too good to be true.