Law360 Pulse conducted a survey from mid-December 2023 through February 2024 asking lawyers for their take on using generative AI, a type of AI capable of creating content. About three-fifths of survey takers worked at the partnership level and two-thirds were men. Nearly all were based in the United States.
Half of the 384 surveyed lawyers said questions about whether their firms encourage, offer training in, or warn against the use of generative AI did not apply to them.
A third said they'd been warned against the unauthorized use of the technology.
"We have to create guardrails for our people to use it," Ramos said. "We have to decide which platforms to use and how to use them."
Clarke Silverglate, which has about 10 attorneys, does not currently have a written AI policy, but the firm's leadership meets weekly to set AI rules. The firm's current unwritten policy restricts AI use to certain models and platforms approved by leadership, and written guidelines are in development.
Ramos said these rules are largely shaped by the AI ethics guidelines the Florida Bar approved in January, which state that attorneys must protect the confidentiality of client information when using these tools and must review the output the platforms create.
Without the right policies, generative AI tools can be a data protection liability: some AI models are trained on the data they are fed, creating the potential for private information to leak to the public.
Business law firm Gunster, also based in Florida, has a formal generative AI use policy that aligns with the recent Florida Bar ethics opinion, according to Michelle Six, of counsel with the firm.
Six said that the firm's guidelines, which predate the Florida Bar opinion, were crafted to protect the confidentiality of client information by requiring that generative AI tools come from approved vendors that follow strong data protection principles. Lawyers are forbidden from using open-source generative AI, meaning publicly available systems.
Florida isn't the only state bar to suggest ethics guidelines for using AI. In April, the New York State Bar Association recommended that firms protect attorney-client privilege when using AI.
"Where I think we probably see a tension is between the risks associated with the use of generative AI and the rewards," Six told Law360 Pulse.
Guidelines or not, some firms are jumping into the AI pool.
Nearly 16% of respondents to the Law360 survey belonged to firms that encouraged the use of AI, while 14% said their firms offered training on how to use AI writing and research tools.
About 20% of respondents are using generative AI for legal research and document creation.
Most survey respondents, however, are not currently using generative AI.
"To ignore the ways that these tools can help us evolve as attorneys and be better advocates for our clients, I think, is not in keeping with what the technology is doing," Six said. "We can't be so scared of them that we think they're going to replace human beings."
Six said that Gunster is currently testing different generative AI tools. While Six could not disclose the tools, she said that the firm is excited about the products.
Clarke Silverglate is testing the generative AI functionality from LexisNexis and Westlaw. Ramos said that both tools are works in progress, but using them keeps the firm current with the latest technology.
AI Policy Creation Tips
Whether or not a firm has explored generative AI, experts say that now is the time to create an AI policy.
McAngus Goudelock and Courie LLC, a law firm with over 300 attorneys, enacted an internal AI policy, according to attorney Jeffrey Cunningham. To create its AI policy, the law firm received feedback from numerous stakeholders, including clients and attorneys, and considered available bar opinions from multiple jurisdictions.
Cunningham told Law360 Pulse that the key elements of its AI use policy include a clear definition of AI, ethics and professional conduct concerns, a notice to clients about the use of AI, a requirement of supervision when using AI tools, an analysis of typical AI uses, data protection and confidentiality rules, and education about AI tools.
Six said that firms should ensure that their AI policy is tailored to their own organization's needs, its security obligations and the types of tools in place.
"There is no one way to have an AI policy," Six said. "Everybody's systems are so different and the way people share their data is different."
Above all, protecting client data should be the foundation of a firm's AI policy, according to Six. Firms working with external AI vendors should also ensure that those providers follow the same data protection standards.
However, the AI policy should not be etched in stone, experts said. It should be a living document that evolves as the technology does.
Six said that firms should be extremely cautious with AI now but stay nimble enough to adapt to changes. A future generation of AI tools might address these confidentiality concerns while expanding their capabilities.
"As our understanding of these models changes over time, I also expect our policies to change over time," Six said. "The policy that is right for us right now may not be the right policy five years from now."
--Editing by Pamela Wilkinson, John Campbell, Rachel Reimer and Jack Collens. Graphic by Jason Mallory.
Note: Law360 is owned by LexisNexis Legal & Professional, a RELX company.