
Should The US Trust Corporations To Lead On AI Standards?

The mistakes of the hands-off approach to Twitter and Facebook show that society needs to better regulate technological advances like artificial intelligence, but any regulatory structure must remain flexible as AI rapidly changes, according to a group of legal and tech experts.

The experts gathered at the University of California, Berkeley on Wednesday for a three-day program at the Berkeley Law Artificial Intelligence Institute. One panelist, Ilona Logvinova, associate general counsel at McKinsey & Co. and head of innovation for McKinsey Legal, said new governance structures are needed to address how AI fits alongside other emerging technologies like the metaverse and blockchain "to form a new way of how we live in this world."

In the absence of U.S. regulation, Logvinova praised what companies like Google and Microsoft are doing in setting up guardrails, ethical standards, governance systems and other best practices.

"I think we collectively recognize on a societal level that this technology is powerful, and it can do a lot of good, but it can also do a lot of bad," she said. "And so in the absence of having an authority to tell us that we need to do something, we're taking our own steps together."

She said in-house lawyers in particular need to stay closer to the technology so they can apply risk and legal considerations by design.

"I think this is a really important takeaway for everyone in the room and for current and aspiring lawyers — we really need to get sharp on tech," Logvinova said.

She added that lawyers need to understand how business teams are working "and [know] some level of what they're doing because we can't just be at the end of the line advising on what they're putting out to market or what they're deploying internally. We need to be in the room from the beginning of what they're building."

Moderator Brandie Nonnecke, an associate research professor at Berkeley's Goldman School of Public Policy and co-faculty director of the Berkeley Center for Law and Technology, cautioned that while some companies are building good structures, society cannot be naive about their motivation "to potentially do assessments or transparency reports that paint them in a positive light."

When it comes to leading the way on AI, "can we really trust industry?" Nonnecke asked.

"I would be more comfortable answering yes to that question if it were not for the last 20 years [of a social media] 'shitshow,'" replied Hany Farid, a Berkeley professor with a joint appointment in electrical engineering and computer sciences and the School of Information. Farid has worked with legislators in the U.S. and Europe on social media and AI regulations.

"I think we tried to get out of the way and let technology boom, and now we have Elon Musk running Twitter [now X]. And Mark Zuckerberg running Facebook. And so, I don't think so," Farid said of trusting business.

He said society needs a carrot-and-stick approach if companies are going to take a responsible path on AI.

"Everybody's saying the right things," Farid added. "But I will tell you, while these companies are saying publicly [that they] believe in ethical deployment, they are lobbying Capitol Hill and Brussels and the U.K. and every other country not to pass legislation. So it's hard to take that seriously."

But Farid agreed with Logvinova that a middle ground should be sought and "debilitating regulation" should be avoided.

He said that what concerns him, however, is U.S. lawmakers' lack of expertise in this area.

"If you go to Capitol Hill and you meet our representatives," he said, "it doesn't give you a lot of confidence that we're going to move in the right direction. Not only do they not understand the technology, it's incredibly partisan, and they're being lobbied to oblivion."

Farid and others on the panel indicated that they are looking to Brussels, the home base of the European Union, to lead the way. "They're more sensible, more well-informed, more thoughtful about these issues," he said.

The panelists referred to this approach as the Brussels effect, where the EU has taken the lead on privacy regulations and now AI. Companies that want to do business globally, and especially in the large European marketplace, are closely watching and adapting to AI legal developments in the EU, they said.

The group also warned that AI is now changing at a speed that has surprised even panelists who have worked in the field for a decade.

Peter Tsapatsaris is senior corporate counsel for AI legal at Salesforce, which is both a producer and a consumer of AI products. He talked about some of the steps Salesforce has taken to keep up with the changes.

"We have an ethical abuse team that reviews already [made] products to make sure that we're not unintentionally causing harm," Tsapatsaris said. "And we have an AI acceptable use policy. It's very standard in the industry now to have an acceptable use policy which prohibits ... using these systems to generate sexual content or using them for deceptive activity."

Farid pointed to harms from the deceptive use of AI, such as fake audio of a CEO making a pronouncement, or extortion plots using fake, nonconsensual sexual images of women posted online, especially politicians and celebrities.

"And here's the big one, in a room full of lawyers — when you go into a court of law and you introduce evidence, how do you trust it anymore?" Farid asked. "Because anybody gets to say it's a deepfake," he said, referring to highly realistic but fabricated audio or video created with generative AI tools.

"And suddenly we have this very complicated problem because that audio recording from the police, that video from the CCTV, the image of somebody doing something illegal — suddenly we now have to prove [they are real]. And I think that is going to be very complex for the rules of evidence because we haven't really quite caught up."

Libby Perkins, general counsel of SignalFire, talked about the challenge for firms like hers, a $1.8 billion venture capital fund that operates as a technology company composed of data scientists, engineers and investors. She compared AI investment to the crypto world.

"It was in April, that's around the time that all the crypto hucksters had moved on to become AI hucksters, and that's the trend," Perkins said. "Our job was to cut through that."

For the most part, she explained, her firm looks for a technological advance that people will want to use and that can be built in a proprietary way. But it also must examine how a prospective portfolio company acquires and uses customer data.

"We've learned from the social media [times] that we didn't do that particularly well from an ethical perspective. I do see my colleagues spending a lot more time thinking about [how] we kind of goofed up. ... I think we're trying to learn from that. There is definitely a responsibility among venture capitalists to pull back sometimes and not invest in something they shouldn't. But I understand it's hard. There's the competition and that push to invest."

--Editing by Marygrace Anderson.


