DOJ Gets Crash Course In AI As Attys Brace For Crackdown

(August 13, 2024, 7:45 PM EDT) -- The U.S. Department of Justice is working to keep pace with the swift rise of the tools known as artificial intelligence, investigating potential fraud as its Criminal Division learns the nuances of the technology — an unsettling dynamic for some defense lawyers.

So far, examples of criminal charges involving the use of AI are scant.

"But we could see a future where the DOJ uses AI to detect and investigate threats to the election process, combat misinformation and deepfakes, and triage cases involving various crimes, including national security and complex white collar offenses," Steven Lee, vice chair of the white collar defense practice at Lewis Brisbois Bisgaard & Smith LLP, told Law360.

At the moment though, the DOJ appears to be wading into shallower waters by investigating companies suspected of exaggerating the capabilities of their products, according to Joel M. Cohen, chair of White & Case LLP's white collar practice group, who represents several of the companies in question.

The U.S. Securities and Exchange Commission, which pioneered enforcement efforts aimed at what it calls "AI washing," also is involved in the probes, Cohen told Law360. He declined to name the targeted companies but said they all use the technology across a variety of markets and industries and are the subjects of DOJ criminal fraud or misstatement investigations.

Investigators are concerned with how the companies describe their tech and the extent to which employees are involved in the process, Cohen said. He asserted that the probes arise from what may be a misunderstanding within the DOJ of "human-in-the-loop," a common feature of AI systems in which people interact with and "train" the technology to improve it and ensure that it's working properly.

"It's not like the computer HAL in '2001: A Space Odyssey' that is literally doing everything," Cohen said.

He added, "We've been doing a lot of educating to try to get the DOJ and others to understand that it isn't necessarily misrepresentation. They just aren't, respectfully, understanding the nature and extent of the product."

But he said the DOJ has been "very receptive" and is "sopping up information" from industry players.

"I can't tell you how it's going to end up, but they're definitely listening, and we're happy about that," he said. "I imagine there are going to be a number of decisions on whether to bring cases by year's end, so we're going to start to see, to some extent, how responsive they are."

Enforcement interest in AI is a logical progression of the DOJ's focus on data analytics in recent years, which has included the 2022 hiring of former Anheuser-Busch InBev SA executive Matt Galvin as the first data expert in the Criminal Division's fraud section. At his previous job with the world's largest brewer, Galvin leveraged machine learning, a component of AI, to improve compliance systems.

The DOJ further beefed up its tech savvy in February when it tapped Jonathan Mayer as the department's first chief AI officer, an advisory role. He previously served as a technology adviser to Kamala Harris, then a U.S. senator and now vice president, from 2017 to 2018.

U.S. Attorney General Merrick Garland touted Mayer's appointment as "invaluable in ensuring that the entire Justice Department ... is prepared for both the challenges and opportunities that new technologies present."

Attempts to speak with Mayer and Galvin were unsuccessful.

"There's a lot of weariness in the DOJ that they waited too long on the cybersecurity and cryptocurrency fronts, and they don't want to make the same mistakes here on AI," said Beth George, a Silicon Valley-based partner at Freshfields Bruckhaus Deringer LLP and former U.S. Department of Defense acting general counsel.

"They were way behind on cybersecurity, watching U.S. intellectual property go overseas before they even had a plan," George told Law360. "They were somewhat less but still behind on crypto, not standing up a formal task force until nine years after the Treasury Department had established a group."

Deputy Attorney General Lisa Monaco has stressed in several speeches this year that the DOJ is serious about curbing the misuse of AI, which she has described as the "ultimate disruptive technology" and the "sharpest blade yet" on the double-edged sword that is new tech.

Monaco also announced in March that the DOJ would seek stiffer sentences when criminals use the tech to make a white-collar crime "significantly more serious," and the department has proposed a specific AI-based sentencing enhancement in its most recent annual policy recommendations to the U.S. Sentencing Commission.

The enhancement would apply to defendants who used AI while preparing to commit a crime, while committing it, or while trying to evade arrest afterward.

The proposal does not require the government to show that a special skill was involved in using AI to commit a crime, which reflects the DOJ's concern that the technology makes complex crimes easier to commit, said Anna Gressel, a New York-based litigation counsel at Paul Weiss Rifkind Wharton & Garrison LLP who focuses on AI and digital tech.

"We're looking at potentially a whole range of actors who can act in a sophisticated way, even though they don't have special skills," Gressel said.

Cohen said the enhancement proposal's language seemed "odd" to him because it didn't offer guidance on how to distinguish degrees of AI's involvement in a crime. Instead, the proposed enhancement "suggests categorically that AI merits enhanced punishment, and that doesn't seem very logical to me," he said.

"It's heavy on signal and lighter on nuance," Cohen said.

--Additional reporting by Steven Lerner. Editing by Brian Baresch and Dave Trumbore.

