The New York City Bar hosted its Artificial Intelligence Institute on Monday with a panel discussing generative artificial intelligence tools, which produce text in response to prompts, and the large language models, or LLMs, that power them.
The panel was moderated by Wendy Butler Curtis, who co-chairs the LLM subcommittee of the bar's task force on digital technologies. Curtis, also the chief innovation officer at Orrick Herrington & Sutcliffe LLP, said the firm is reviewing which tools to build, buy or modify.
AI is not new to the legal field, as lawyers have used tools such as legal analytics for years, Julie Chapman, the vice president and head of legal in North America for LexisNexis, said during the panel.
Generative tools are relatively new but becoming increasingly common. Law360 Pulse has found that 35% of law firms are using them.
"This is the year to start trying it if you haven't already," Chapman said.
Several use cases for generative tools already exist. Chapman said legal professionals can use them to prepare for oral arguments, draft documents and conduct legal research.
While the tools are becoming more common, lawyers are still trying to grasp the new terminology surrounding them.
Ali Vahdat, applied research manager at Thomson Reuters, said machine learning, which uses algorithms to detect patterns in data, is a subset of AI. Deep learning, which uses layered neural networks to learn from large volumes of data, is a subset of machine learning. And generative AI, which produces new content such as text, is a subset of deep learning.
Panelists also discussed terms that describe how generative models are trained.
Robert Mahari, a co-chair of the city bar's LLM subcommittee, said generative models typically go through pre-training, in which they learn from vast amounts of text, much of it crawled from the web. After this step, Mahari said, the models are fine-tuned on domain-specific material, such as legal text, and are then guided at query time by prompts that include examples of the desired output.
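To make that last step concrete, here is a minimal Python sketch of few-shot prompting, in which labeled examples are packed into the prompt so the model can follow the pattern; the contract clauses, labels and function names below are invented for illustration and are not drawn from any panelist's tooling.

    # Minimal sketch of few-shot prompting: the model sees labeled
    # examples inside the prompt itself, then completes a new case.
    # All clauses and labels are hypothetical, for illustration only.

    EXAMPLES = [
        ("The parties shall resolve disputes by binding arbitration.",
         "Arbitration clause"),
        ("This agreement is governed by the laws of New York.",
         "Choice-of-law clause"),
    ]

    def build_few_shot_prompt(new_clause: str) -> str:
        """Assemble a prompt of labeled examples plus the new query."""
        parts = ["Classify each contract clause."]
        for clause, label in EXAMPLES:
            parts.append(f"Clause: {clause}\nLabel: {label}")
        parts.append(f"Clause: {new_clause}\nLabel:")  # model fills this in
        return "\n\n".join(parts)

    print(build_few_shot_prompt(
        "Either party may terminate on 30 days' written notice."))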
Chapman said firms should still rely on humans at every step in the process, because generative tools produce text that may look legitimate but can't be relied on for accuracy.
Some tools also use retrieval-augmented generation, or RAG, in which the model retrieves relevant passages from a curated, often proprietary database at query time and uses them alongside the prompt, rather than relying only on what it learned in training. Vahdat said RAG tries to "ground the responses."
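As a rough illustration of that retrieval step, here is a minimal Python sketch of the RAG pattern under simplified assumptions: a toy word-overlap score stands in for the embedding-based vector search a production system would use, and the source documents are invented.

    import re

    # Minimal sketch of retrieval-augmented generation (RAG): fetch the
    # passages most relevant to a query, then prepend them to the prompt
    # so the model answers from them. Documents here are hypothetical.

    DOCUMENTS = [
        "Opinion A: The court held the arbitration clause unconscionable.",
        "Opinion B: Summary judgment requires no genuine dispute of material fact.",
        "Memo C: The firm's standard indemnification language was updated in 2023.",
    ]

    def tokens(text: str) -> set[str]:
        """Lowercase a string and split it into a set of words."""
        return set(re.findall(r"[a-z']+", text.lower()))

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Return the k documents sharing the most words with the query."""
        q = tokens(query)
        ranked = sorted(DOCUMENTS, key=lambda d: len(q & tokens(d)),
                        reverse=True)
        return ranked[:k]

    def build_grounded_prompt(query: str) -> str:
        """Prepend retrieved passages so the model answers from them."""
        context = "\n".join(retrieve(query))
        return (f"Answer using only the sources below.\n\n"
                f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:")

    print(build_grounded_prompt("What is the standard for summary judgment?"))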
A recent study suggested that RAG may not eliminate "hallucination," output from generative tools that appears authoritative but contains false information. The study also called for third-party benchmarking of legal AI tools.
Mahari said RAG works, but users have to be careful about the underlying databases it draws on. As for third-party benchmarking, Mahari said it could be difficult to design tests that reflect how people actually use these new tools.
--Editing by Brian Baresch.
Law360 is owned by LexisNexis Legal & Professional, a RELX Group company.