Attys Worry OpenAI IP Row Will Drag On Amid AI Policy Push

(September 26, 2024, 11:00 PM EDT) -- A BigLaw attorney and consumer advocates found common ground during the seventh annual Berkeley Law AI Institute on Thursday, expressing concerns that courts won't timely adjudicate copyright claims against OpenAI and others, while an FTC attorney noted the commission is already enforcing the Federal Trade Commission Act against companies that overhype their AI.

The discussion on the ongoing high-stakes copyright litigation by writers and content creators against AI makers kicked off the second day of the three-day event co-hosted by the University of California, Berkeley School of Law's Executive Education program and UC Berkeley's Center for Law and Technology in Berkeley, California.

The discussion, titled "Deep Dive on Legal Issues," began with Morrison Foerster LLP partner Joe Gratz giving a brief presentation on the state of the various copyright cases in which defendants have asserted a fair-use defense. Gratz, who represents defendant OpenAI Inc. in litigation pending in California, prefaced his presentation by noting that he was speaking in his personal capacity, and not on behalf of his firm or client — a disclaimer many of the event's speakers made.

Gratz pointed out that most of the litigation, which targets Microsoft Corp.-backed OpenAI, Facebook parent Meta Platforms Inc. and other tech companies, is still at the early pleading stage, although some cases have entered fact discovery. He also noted that last month a federal judge delayed the first copyright jury trial involving an AI product, postponing a highly anticipated clash in which Thomson Reuters accused the tech startup ROSS Intelligence of creating a rival AI legal research platform using copyrighted material from the media company's Westlaw database.

Gratz said he thinks it's unlikely that the New York and California judges presiding over the cases will issue a quick decision on the "fact-intensive" legal question of whether using massive datasets, scraped from books and websites, to train the AI tools — dubbed large language models, or LLMs — constitutes fair use under the law.

Another panelist, Corynne McSherry, the legal director of consumer advocacy group Electronic Frontier Foundation, agreed with Gratz that a quick decision is unlikely, particularly in the litigation against Meta, since lead class counsel was recently replaced by veteran litigator David Boies days after the presiding judge criticized the lead plaintiff's attorney.

But McSherry expressed concern that the courts will take too long to adjudicate the copyright litigation, to the detriment of consumers and their IP rights.

"We're not going to get a clear legal answer for years and that has clear ramifications for AI development, because everybody is operating in the shadow of this litigation," McSherry said.

McSherry compared the AI copyright litigation to another headline-grabbing copyright case — 2007's Viacom International Inc. v. YouTube Inc. — which took years to litigate. YouTube was initially granted summary judgment, but the case was revived by the Second Circuit in 2012. The dispute was ultimately settled before a court could issue a final judgment on the merits.

McSherry argued that in that case, litigation delays favored YouTube, which is owned by Google parent Alphabet Inc., because by the time the case went up to the Second Circuit, the initial public outrage caused by YouTube's technology had dissipated, and society had generally accepted the company's copyright practices and its impact on content creators' rights.

"Courts calm down [over time]," she said, adding that by the time the Viacom case was appealed, "The [Second Circuit] clerks loved YouTube."

Following the litigation discussion, a group of federal and state regulators gave a brief overview of new AI policies their agencies have been pushing, along with enforcement actions already being pursued against alleged law violators.

Federal Trade Commission attorney Benjamin Wiseman, who is the associate director of the FTC's Division of Privacy and Identity Protection, appeared remotely and told the audience that the FTC has recently given clear guidance that companies that claim their AI tools can do certain things need to have "substantiation."

Wiseman also pointed out that on Wednesday the FTC announced a flurry of recent enforcement actions aimed at cracking down on the use of AI to "supercharge" harmful and deceptive business practices, including a case targeting claims made about a service that promised to provide "the world's first robot lawyer."

Wiseman noted that the recent enforcement actions against AI companies are distinct from the agency's traditional ones, because in many of the cases the companies are required to destroy or delete consumer data, their AI models and information collected by those models.

"Turbocharging fraud with AI is no different than traditional fraud," he added.

Wiseman said the FTC is working with other regulators both domestically and internationally to address AI advancements and to ensure that regulators don't repeat past mistakes that have allowed for the "mass surveillance of Americans when they're online."

The discussion followed a sobering presentation that Dan Hendrycks, executive director of the nonprofit Center for AI Safety, gave Wednesday on the first day of the Berkeley Law AI Institute about the risks AI poses to public infrastructure and society at large. Hendrycks explained how AI tools could allow unsophisticated bad actors to quickly and easily develop powerful bioweapons and viruses that could cause global pandemics.

During his presentation, Hendrycks argued that there currently aren't adequate "tamper-resistant" safeguards in place to address those risks, and he suggested that companies could be held to a "reasonable care" legal standard in assessing models for such risks while obtaining informed consent from users. He also suggested that the U.S. government could regulate AI computer chips like enriched uranium, particularly since hardware can be easier to control than algorithms and data.

The event will continue Friday morning with Federal Communications Commission Chair Jessica Rosenworcel, who is expected to give a talk on transparency issues involving AI.

–Additional reporting by Allison Grande and Ivan Moreno. Editing by Michael Watanabe.

For a reprint of this article, please contact reprints@law360.com.
