Judiciary Advisers Back Development Of AI Evidence Rules

The federal judiciary's advisory panel for evidentiary issues agreed Friday to develop rules aimed at strengthening scrutiny of testimony and materials derived from artificial intelligence systems, saying AI-generated information should meet the same reliability standards that apply to expert witnesses.

At a meeting in New York City, the Judicial Conference's Advisory Committee on Evidence Rules supported the drafting of two rules — one covering AI-manipulated video or audio clips known as "deepfakes" and another covering "machine-generated evidence" that can consist, for example, of an advanced software program's conclusions about DNA samples.

The rules are expected to be presented at the committee's next meeting on May 2 in Washington, D.C. Of the two incipient rules, the one covering deepfakes is the lower priority, with panelists generally agreeing that existing safeguards are sufficient to keep fake footage from affecting trials, or even from being introduced in the first place.

"It seems to me that the greatest safeguard … is the fact that lawyers are admitted to a bar, and if they knowingly try to introduce deepfakes, they will be disbarred," U.S. Circuit Judge Richard J. Sullivan of the Second Circuit, a panel member, said during Friday's hourlong discussion of AI policymaking.

Some observers have agitated for action in recent months, and the panelists on Friday reviewed multiple proposals before settling on a modest approach to deepfakes: drafting a rule that will remain indefinitely in draft form, ensuring the committee isn't caught off guard if a deepfakes rule ever proves necessary.

"There's always the concern that we're in a different world next year or the year after — technology changes, deepfakes [could] get better," Elizabeth J. Shapiro, a representative of the U.S. Department of Justice, said at Friday's meeting. "So I think it's worth having something that we're working on — to have, so to speak, in the bullpen."

There is more urgency, however, behind the rule for machine-generated evidence and "machine learning," in which computer systems refine their own behavior from data rather than following fixed instructions. Committee members on Friday fretted about powerful software systems making predictions or inferences that are converted into trial testimony without facing the typical Daubert analysis of reliability and relevance.

"If it were a real witness, and they would be subject to Daubert, then the machine should be subject … to a Daubert requirement as well. That's the basic idea," Fordham University School of Law professor Daniel J. Capra, a scholar of criminal procedure who assists the committee, said Friday.

Friday's agenda included Capra's draft of a machine-generated evidence rule, envisioned as Rule 707 of the Federal Rules of Evidence. The draft would subject certain types of AI-derived evidence, such as a statistical analysis produced with AI-equipped software, to the standards of Rule 702, the existing provision governing expert witnesses, which sets a four-part test for whether testimony is likely to be helpful and reliable.

"When a machine draws inferences and makes predictions, there are concerns about the reliability of that process, akin to the reliability concerns about expert witnesses," according to an agenda note provided to committee members on Friday. "But if machine or software output is presented on its own, without the accompaniment of a human expert, Rule 702 is not obviously applicable. Yet it cannot be that a proponent can evade the reliability requirements of Rule 702 by offering machine output directly."

Although Judge Sullivan voiced skepticism about the dangers of deepfakes, he expressed concern about sophisticated-sounding evidence going unsubstantiated.

"It sounds like the technology is [now] getting to a point where there's no witness who's able to explain how the machine can learn. This doesn't mean it's not reliable, but it means that it's not going to be able to be subjected to the traditional Daubert analysis," Judge Sullivan said. "I am more confident that we do need a rule for this one."

The judiciary's rulemaking process is notoriously slow even for relatively straightforward matters, and experts have said that the process will be especially difficult in the context of AI tools that are complex and rapidly evolving. But advisory committee members signaled that it's crucial to start tackling the challenge in earnest.

"I think … we do need to address this," U.S. District Judge Jesse Furman of the Southern District of New York, chair of the evidence panel, said Friday. "It's a real problem and will become an even bigger problem."

--Editing by Michael Watanabe.

