AI Passes the Bar?
by Joseph S. Davidson
In the past year, advances in AI technology have captured the public’s fascination. As more systems become widely available, AI has progressed from an imagined future to an imminent reality. The prospect of widespread adoption has also triggered numerous concerns, however, as the technology threatens to radically alter many aspects of life and work.
As in many other industries, the advancement and integration of AI promises to make the practice of law more efficient and cost-effective. In fact, AI has already been incorporated into legal practice: it has long been used to review legal documents, automate discovery, and aid in legal research. Despite the technology’s great potential, however, legal professionals and clients alike must consider the risks that accompany it.
One of the better-known dangers posed by AI is “hallucination,” the generation of false information. Generative AI systems create new content, including text, images, audio, code, and video; ChatGPT, for example, responds to a user’s textual prompts with natural-language answers. The risk of incorrect information stems from ChatGPT’s very design. ChatGPT does not look up answers in a database; instead, the chatbot is a “language model” that has been trained on large amounts of data to recognize language patterns and generate the responses it predicts are most relevant to a user’s prompt.
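To make the mechanism concrete, the sketch below builds a deliberately simplified, toy “language model” (a word-pair chain, not ChatGPT’s actual neural-network architecture) in Python. It illustrates the underlying idea only: text is generated by predicting plausible next words from learned patterns, with no lookup of verified facts, which is why fluent-sounding output can be entirely fabricated.

```python
import random
from collections import defaultdict

# Toy illustration only: learn which words tend to follow which
# from a tiny sample of legal-sounding text. Real language models
# are vastly larger, but the principle of pattern-based
# generation (rather than database retrieval) is the same.
corpus = (
    "the court held that the motion was denied "
    "the court held that the appeal was dismissed "
    "the motion was granted and the appeal was denied"
).split()

# Record, for each word, the words observed to follow it.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# Generate text by repeatedly sampling a plausible next word.
word = "the"
output = [word]
for _ in range(10):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)

# The result reads like legal prose, yet nothing in it was
# checked against any source of truth.
print(" ".join(output))
```

The toy model never consults a record of real decisions; it only recombines patterns it has seen. Fluency, in other words, is no guarantee of accuracy.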
The risk that generative AI can produce incorrect information came to the forefront in Mata v. Avianca, No. 22-cv-01461 (PKC) (S.D.N.Y.), in which a lawyer submitted a brief citing nonexistent cases generated by ChatGPT.
Roberto Mata’s lawsuit began like so many others: he sued the airline Avianca, saying he was injured when a metal serving cart struck his knee during a flight to New York. When Avianca asked the judge to dismiss the case, Mata’s lawyers submitted a 10-page response that cited more than half a dozen relevant court decisions. There was just one hitch: neither the airline’s lawyers nor the court could find the decisions or the quotations cited and summarized in the brief.
Avianca’s lawyers wrote to the court, saying they were unable to find the cases cited in the brief, nor could they locate the quoted material anywhere else.
The court ordered Mata’s attorneys to provide copies of the opinions referred to in their brief. The lawyers submitted a collection of eight opinions, most of which listed the issuing court and judges, the docket numbers, and the dates.
Eventually, the attorney who prepared the brief threw himself on the mercy of the court, saying in an affidavit that he had used ChatGPT to do his legal research, “a source that has revealed itself to be unreliable.” The attorney said he had no intent to deceive the court or Avianca; he had never used ChatGPT before and “therefore was unaware of the possibility that its content could be false.” In fact, he told the court, he had asked the program to verify that the cases were real, and it had said “yes.” The outcome was not pretty: the attorney was monetarily sanctioned for acting in “subjective bad faith.”
As in most other industries, AI has shown it can add value to clients and to the legal profession, and it may yet revolutionize the way legal services are provided. But as this new technology continues to evolve, legal professionals and clients alike must remain wary of its risks.