Members of the House Financial Services Committee sought to get their arms around the potential risks and rewards of generative AI for finance and housing opportunities as they probed industry witnesses on whether there are existing regulatory gaps that Congress may need to fill.
The lawmakers' concerns centered on certain models' potential for bias against marginalized communities and generative AI's ability to fake digital identities, but on the whole, both lawmakers and witnesses called for a cautious approach to a technology whose use cases are still emerging.
"We should be leery of rushing legislation," said Committee Chair Rep. Patrick McHenry, R.-N.C. "It's far better we get this right, rather than be first. In other words, policymakers should measure twice and cut once."
American financial regulations are traditionally technology neutral, said McHenry, which means firms offering financial or housing services using the technology in their processes are still subject to existing consumer protection laws. However, the committee intends to continue to consider whether "targeted legislation" is necessary to address any specific gaps related to AI, he said.
That could be as simple as formally clarifying that AI use will be held to the same standards that humans are subject to, said Vijay Karunamurthy, chief technology officer at Scale AI, which provides training data for firms developing AI applications.
But models are only as good as the data they're trained on, Karunamurthy said, which highlights the importance of rigorous testing by the firms employing them. Models trained on limited or biased data can produce biased outcomes, which has in the past led to minority groups being charged higher rates for housing or being unfairly denied access to services based on a faulty risk profile, said Lisa Rice, president and CEO of the National Fair Housing Alliance. AI presents the next frontier of civil rights issues, she said.
Democratic lawmakers probed the witnesses on the risk of discrimination in models used for underwriting or decision-making and touted the importance of having guardrails to ensure fairness.
"While AI has the potential to expand access to financial products, we need to ensure that AI models are explainable, transparent and do not lead to bias," said Rep. Stephen Lynch, D-Mass.
While AI holds great promise for equity by lowering costs that enable wider access to financial services, it can also yield inaccurate decisions that consumers don't understand and therefore can't challenge, said Rice. Regulations that give consumers access to the data that determines the fate of their housing or loan applications are key to rooting out potential biases, she said. Firms should also be incentivized to test for discriminatory outcomes and adjust accordingly.
But at its best, the technology allows smaller credit unions to compete with larger financial institutions, helps exchanges better detect potential illicit activity and can democratize markets by providing lower-cost products to main street investors, witnesses said.
To preserve these benefits, any regulation must be designed to allow rapid innovation, said Frederick Reynolds, a former deputy director of FinCEN who is now deputy general counsel for regulatory legal and chief compliance officer at FIS Global.
At a minimum, any federal approach must create a "cohesive regulatory framework" that allows global firms like FIS to comply across states and worldwide without compliance disparities. While the current U.S. regime is sufficient to deal with AI "at this point," said Reynolds, any future legislation should continue to be technology-neutral.
"I think the challenge is with AI and generative AI, different use cases will demand very different regulatory solutions, and so I think a one-size-fits-all will be very difficult for AI," Reynolds said.
Likewise, lawmakers should legislate for "the risks and opportunities of the use case" rather than the technology itself, said John Zecca, Nasdaq's executive vice president and global chief legal, risk and regulatory officer. Zecca agreed that existing financial rules already capture artificial intelligence, and the government has the necessary tools for the current use cases.
However, gaps will develop over time, he said, and that's where agencies will have to take action. Regulators should examine potential risks to market stability, the use of deepfakes and how to verify that humans are behind digital identities.
Digital identification will have to become more sophisticated as generative AI makes it easier to fabricate the information needed to create phony accounts and deepfakes, witnesses agreed. Though discussion about solutions is at an early stage, Rep. Bill Foster, D-Ill., floated that perhaps digital ID issuance is "an essential government role."
However, adoption of government-issued online identification has been slow, according to Elizabeth Osborne, chief operations officer of Great Lakes Credit Union. Still, any solution will require firms to rethink how they identify customers beyond easily faked information like date of birth, witnesses said.
The hearing builds on an AI report released by the committee last week, which McHenry called the "blueprint" for future action on AI use in financial services. The report consolidated views from stakeholders and regulators at a series of roundtable discussions with lawmakers and concluded that the committee will focus on ensuring the enforcement of existing laws as they relate to AI.
--Editing by Lakshna Mehta.