Injury Attys Sanctioned Over AI-Hallucinated Case Citations

(February 24, 2025, 10:28 PM EST) -- A Wyoming federal judge overseeing a personal injury lawsuit against Walmart sanctioned the plaintiffs' attorneys from Morgan & Morgan PA and the Goody Law Group after they filed pretrial motions containing case law hallucinated by artificial intelligence, but acknowledged Monday their "remedial steps, transparency and apologetic sentiments."

In his order, U.S. District Judge Kelly H. Rankin remarked on the potential future benefits of using artificial intelligence in the legal industry, including speeding up research and the drafting of filings, but cautioned that the current technology has its shortcomings — such as in this case when it hallucinated eight of the nine cases cited in the motions in limine.

"The instant case is simply the latest reminder to not blindly rely on AI platforms' citations regardless of profession," Judge Rankin said. "As attorneys transition to the world of AI, the duty to check their sources and make a reasonable inquiry into existing law remains unchanged."

According to the order, Rudwin Ayala of Morgan & Morgan PA was the attorney who actually drafted the motions at issue, while T. Michael Morgan, the son of Morgan & Morgan founder John Morgan, and Taly Goody of Goody Law Group signed their names without reviewing the documents.

Judge Rankin revoked Ayala's pro hac vice status, which allows an attorney to participate in a jurisdiction in which they are not licensed to practice, and ordered him to pay a fine of $3,000, which was based on the number of hallucinated cases compared to real cases, his access to legal research resources, and the fact that legal professionals have known about these AI hallucination issues "for quite some time," the order states.

In addition, Morgan and Goody are to each pay a fine of $1,000 because even though they did not have a hand in the drafting of the filing, they failed to review the documents before adding their signatures, according to the order.

"The key takeaway for attorneys is simple: Make a reasonable inquiry into the law before signing (or giving another permission to sign) a document," Judge Rankin said. "If an attorney does not do so, then they should not sign the document."

"However, if the attorney decides to risk not making reasonable inquiry into the existing law and signs, then they may be subject to sanctions," he added.

The issue came up a few weeks ago when defendants Walmart and Jetson Electric Bikes pointed out in their opposition brief that they could not identify the cited cases through Westlaw, LexisNexis, PACER and Google, but were able to find some of the citations on ChatGPT. Law360 is owned by LexisNexis Legal & Professional, a RELX Group company.

Except for one case, "plaintiffs' cited cases seemingly do not exist anywhere other than in the world of artificial intelligence," the defendants said, asking the court to "wholly disregard" the cases and deny the motions in limine.

The underlying litigation was filed in July 2023 by Stephanie and Matthew Wadsworth on behalf of their four minor children after their allegedly "defective and unreasonably dangerous hoverboard" exploded and caught fire in their home.

The family says the product, a Jetson Plasma Iridescent Hoverboard, is defective, hazardous and malfunctioned when used in the intended manner.

Judge Rankin issued an order to show cause why the attorneys should not be sanctioned or face other disciplinary action, telling them to "provide a true and accurate copy of all cases used in support" of the motions at issue.

The attorneys then filed a notice that they were withdrawing the motions at issue, and expressed "great embarrassment" when they told the judge the motions did, indeed, contain case law hallucinated by artificial intelligence.

In his order Monday, Judge Rankin determined that Ayala drafted the motions in limine at the direction of Morgan, who had no real involvement in the drafting process, while Goody, who is local counsel in the matter, was not involved in the drafting at all. But all three attorneys added their signatures to the bottom of the motions, the order states.

Ayala then uploaded them to an in-house artificial intelligence database created by Morgan & Morgan and instructed it to "add more case law regarding motions in limine," among other things, according to the order. He told the court this was his first time using AI in such a way, the order states.

"These search inquiries apparently generated the fake cases," Judge Rankin said. "Without verifying their accuracy, Mr. Ayala included the fake cases in the motions in limine. He first learned the cases were questionable when the court entered the order to show cause."

"Mr. Ayala admits that the cases are non-existent and his reliance on the AI platform was misplaced," the judge added.

However, the attorneys have taken remedial steps, including promptly withdrawing the motions, "being honest and forthcoming about the use of AI," paying the defendants' counsel fees for defending the motions in limine, and implementing policies and training to prevent another such occurrence in the future, he said.

"The court appreciates respondents' remedial steps, transparency and apologetic sentiments," Judge Rankin said. "Hopefully situations like this do not become common for the judiciary, but should they occur again, the court recommends attorneys should — at the very least — follow these steps to remediate the situation prior to the issuance of any sanction."

Walmart declined to comment Monday. Plaintiffs' counsel could not be reached for comment.

This is not the first time attorneys have run into trouble over faulty citations in legal briefs that seemingly involved artificial intelligence.

Also on Monday, a federal magistrate judge recommended that a Texas lawyer be hit with a $15,000 personal sanction and other discipline for filing three briefs using generative AI that included fake citations in an Indiana ERISA case.

Last month, a Minnesota federal judge threw out an erroneous expert declaration prepared by a Stanford University expert on artificial intelligence in litigation over the state's law on deepfakes, finding that the inclusion of fake, AI-generated sources in his declaration "shatters his credibility with this court."

"The irony," remarked U.S. District Judge Laura M. Provinzino in her order excluding the declaration.

In 2023, two New York personal injury attorneys made headlines by submitting a ChatGPT-generated brief with fake case citations. They were sanctioned for their mistake.

Other courts, including Texas and Missouri state appeals courts, have called out litigants for submitting seemingly AI-generated court filings with fake case citations, and a Manhattan federal judge criticized a law firm for using ChatGPT to support its attorney fee request of more than $100,000.

The plaintiffs are represented by T. Michael Morgan of Morgan & Morgan PA, and Taly Goody of Goody Law Group.

Walmart and Jetson are represented by Eugene M. LaFlamme, Jared B. Giroux and Jillian L. Lukens of McCoy Leavitt Laskey LLC, and Timothy M. Stubson, Brandon E. Pryde and Holly L. Tysse of Crowley Fleck PLLP.

The case is Wadsworth et al. v. Walmart Inc. et al., case number 2:23-cv-00118, in the U.S. District Court for the District of Wyoming.

--Additional reporting by Emily Sawicki, Hailey Konnath, Sarah Martinson, Ryan Boysen, Rose Krebs and Madison Arnold. Editing by Adam LoBelia.

