Michele Carney
Marty Robles-Avila
In the blink of an all-seeing eye, we have transitioned from omnipresent physical and electronic surveillance to the digital chains of algorithms concealed in our devices. As lawyers, we stand at the forefront of the gold rush of generative AI, or GAI, a type of AI powered by large language models, or LLMs, that leverages deep learning techniques, particularly neural networks, to predict and generate text and other media based on their training.
Although it seemed unimaginable several months ago, ChatGPT Gov[2] may soon be authorized to assist federal agencies in making decisions of constitutional significance.[3] The inevitable expansion of AI within the government to use and track sensitive data not only highlights the power of GAI, but also raises serious due process concerns, underscoring the urgent need for robust policies to ensure its ethical and effective use.
As GAI is increasingly adopted in the legal world, there are three considerations lawyers and their firms should be paying attention to.
1. Prompting skills will (probably) always matter.
Lawyers use LLMs to accomplish tasks such as legal research and analysis at lightning speed; document drafting and review; and client support and interaction.
Lawyers have become adept at prompting chatbots to obtain better results, but even as newer models improve, strong prompting skills remain crucial: effective prompts unlock the full potential of most current GAI models,[4] ensuring more accurate, relevant and creative outputs.
LLM leaderboards compare and contrast dozens of models[5] as providers release new versions with improved performance and capabilities for more specific use cases.
First, when using GAI for a case, specificity is critical. Instead of broad questions, frame queries with detailed parameters and context.
For instance, replace "What are the requirements for an H-1B visa?" with the following: "I am an expert immigration attorney. I have a sophisticated employer client. Provide the current U.S. Citizenship and Immigration Services requirements for an H-1B visa application for a software engineer from India, including education, salary and employer obligations, effective as of February 2025."
This detail guides the GAI to focus on relevant information, reducing the likelihood of outdated responses. It also signals that the answer should be pitched to a more sophisticated user.
Second, structure your prompts in steps, as if climbing a ladder. Asking questions seriatim, known as iterative prompting, may lead to a more precise response. Meta prompts, advanced instructions that include details such as persona, guidelines and contextual frameworks, also help optimize output, as in the sketch below.
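To make these ideas concrete, the following is a minimal sketch of a meta prompt followed by an iterative follow-up, using OpenAI's Python client; the model name, prompt text and follow-up question are illustrative assumptions rather than a prescribed workflow, and the same pattern works with any chat-style GAI tool.

```python
# Minimal sketch: a meta prompt (persona, guidelines, context) plus one
# iterative follow-up. Model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in the environment

messages = [
    # Meta prompt: persona, audience and guardrails applied to every answer.
    {"role": "system", "content": (
        "You are an expert U.S. immigration attorney advising a sophisticated "
        "employer client. Be precise, cite governing authority, and flag any "
        "point that may be outdated as of February 2025."
    )},
    # First rung of the ladder: the specific, context-rich question.
    {"role": "user", "content": (
        "Provide the current USCIS requirements for an H-1B petition for a "
        "software engineer from India, including education, salary and "
        "employer obligations."
    )},
]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Next rung: iterate on the prior answer instead of starting over.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": (
    "Now apply those requirements to a candidate with a three-year bachelor's "
    "degree plus progressive work experience."
)})

second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```

The structure, not the particular wording, is the point: the system message carries the persona and guidelines, and each follow-up question builds on the answer that came before it.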
Next, direct your chatbot to cite only authoritative sources. AI can pick up a bunch of junk — such as information from random websites or blogs. Instruct the AI to prioritize information from reputable legal databases. For example: "Using only information from Board of Immigration Appeals cases, analyze the current interpretation of Guatemalan women unable to leave an abusive relationship as a 'particular social group' in asylum cases."
This approach ensures the chatbot's responses are grounded in authoritative and current legal interpretations. However, never completely trust the output and always fact-check. Don't be these guys (there's a new one every month, it seems).[6]
The chatbot may find reliable cases and give a citation, but it may not give the actual holding of the case. There may be dicta or concurrences that contradict the holding. AI may grab this language and cite it as the outcome of the case.
Thus, even if relying on established sources, the lawyer must always go back to the source case to review it and analyze it themselves. You can't outsource your ethical obligations, and you can't rely on a chatbot to get it right.
2. Understanding tech evolutions ahead is essential.
Chatbots have entered their Stone Age in that, like hominids, they are learning to develop and use tools.
Two lateral evolutions promise to dramatically affect the future of legal work: autonomous agents and so-called reasoners. Reasoners, like OpenAI's o1, o3-mini and o3-mini-high models, or DeepSeek's R1,[7] are introspective LLMs that incorporate chain-of-thought reasoning by "thinking" or reflecting before providing an answer. This allows the user to follow the model's thought pattern as it works toward the answer.
For example, when entering a prompt into OpenAI's o3-mini-high, the model will "show" you that it is reasoning, analyzing and explaining before officially giving you a response. We asked o3-mini-high to explain how constitutional law acts as a straitjacket for administrative law, and it reasoned for 14 seconds. In response, it explained the separation of powers and the nondelegation doctrine, protection of individual rights via due process and judicial review, and structural constraints, like appointment and removal of agency officials.
AI agents can be described as AI systems that are given goals they pursue autonomously. Many providers, such as Microsoft, now offer some version of them. Google's Gemini and OpenAI both offer versions called Deep Research, which combine autonomous agents with multistep reasoning capabilities.[8]
As leading AI thought leader Ethan Mollick explains, "Reasoners provide intellectual horsepower, while the agentic systems provide the ability to act" and general-purpose agents will soon expand "beyond narrow tasks to become autonomous digital workers that can navigate the web, process information across all modalities, and take meaningful action in the world."[9]
Already, we're witnessing the inextricable embedding of LLMs into the Microsoft 365 and Google Workspace ecosystems, not just to improve grammar and summarize meetings, but as collaborative knowledge agents synthesizing your work.
Future legal tech solutions will seamlessly integrate multiple LLMs — with distinct proficiencies[10] — accessed by more intuitive, enhanced user interfaces, diminishing the need for users to have advanced prompting skills, or perhaps the need to understand the underlying technology.
But we're not there yet.
3. Law firm policies and training should focus on ethical use.
Technologies will change, but our ethical obligations are unwavering.
As the District of Columbia Bar noted in an April 2024 ethics opinion, "[w]hat technology has not done is alter lawyers' fundamental ethical obligations."[11]
Any firm policy addressing the use of GAI should focus on protecting client confidentiality, including when client data is used in prompts (Rules 1.4 and 1.6 of the American Bar Association's Model Rules of Professional Conduct), and on the importance of competence with respect to generated content (Model Rules 1.1, 5.1 and 5.3).[12] The latter emphasizes the attorney's nonnegotiable obligation to verify that any and all output generated by an AI tool is accurate, relevant, of appropriate quality and compliant with ethical standards before it is used in client work or filed with a court or agency.
Any legitimate GAI solution at this evolutionary stage should already employ end-to-end encryption, have substantial security procedures in place to protect data, and never train its LLM with client data; that is the floor for legal use cases.
Let's start with confidentiality. It's almost cliché to say, but a lawyer "shall not" reveal any information relating to the representation of a client unless there is informed consent, disclosure is "impliedly authorized" to carry out the representation, or another exception applies.[13] Just as important for any GAI policy, Model Rule 1.6(c) also requires a lawyer to make "reasonable efforts" to "prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client."[14]
But what are "reasonable efforts," and how do you accomplish them with a technology described as a black box: billions of parameters, or settings, fine-tuned (trained) on vast amounts of (purloined)[15] data from the internet, generating content based on conditional probabilities that predict the next word, or part of a word, called a token?
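For readers who want to see what that last clause means in practice, here is a toy sketch of next-token prediction; the candidate tokens and their scores are invented, but the softmax step mirrors how a model converts raw scores into a conditional probability for each possible next token.

```python
import math

# Toy example: raw scores (logits) a model might assign to candidate next
# tokens after the phrase "The motion to dismiss is". All values are invented.
logits = {"granted": 2.1, "denied": 1.7, "moot": 0.3, "banana": -4.0}

# Softmax turns raw scores into conditional probabilities that sum to 1.
total = sum(math.exp(score) for score in logits.values())
probabilities = {token: math.exp(score) / total for token, score in logits.items()}

for token, p in sorted(probabilities.items(), key=lambda kv: -kv[1]):
    print(f"P(next token = {token!r}) = {p:.3f}")

# The model picks (or samples) a token from this distribution, appends it to
# the text, and repeats, one token at a time, until the answer is complete.
```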
Comment 18 to Model Rule 1.6 lays out some factors that play into the reasonableness inquiry, which include the difficulty and cost of establishing safeguards, the sensitivity of the data and the extent to which those safeguards adversely affect the lawyer's ability to represent clients.[16] But — and this is a critical part of the rule — the actual unauthorized access or inadvertent or unauthorized disclosure of client confidential information "does not constitute a violation" of the rule if the lawyer has made reasonable efforts to prevent the access or disclosure.
However, lawyers must assess the likelihood of such disclosure before using AI tools.
In addition to ethical constraints, lawyers must also comply with their clients' requirements, which may impose special security measures beyond what the rule requires, as well as with other laws governing data privacy. Lawyers must disclose GAI use if the client "expressly requires disclosure under the terms of the engagement agreement or the client's outside counsel guidelines."[17]
When a lawyer uses a third-party GAI product, they must do so competently and understand "the benefits and risks associated with relevant technology."[18] Again, the rule remains the same, irrespective of the technology. We've seen this movie before with respect to policies surrounding the use of email, cloud computing, metadata and remote employment.
When new technologies emerge, we tend to make sense of them through the prism of the old thing.
Lawyers must understand GAI well enough to know the basics of the technology that they use. If a lawyer does not have the time to learn the technology used in their office, they should make sure they have a staff member who understands it and can train others within the firm. The lawyer or their designated staff person should also be aware that GAI providers can continue training their models on user interactions, including through a process known as reinforcement learning from human feedback.
When querying, say, ChatGPT or Perplexity AI, the chatbot may be learning from the input, which could include personally identifiable information or confidential data.[19] Free public versions of any GAI product should be off-limits for any use involving confidential information.[20]
In addition, using incognito mode does not provide sufficient protection for lawyers when using GAI with client information. Incognito mode primarily prevents local storage of browsing history and cookies, but it does not secure data transmitted to or stored by the AI service.
So, unless your client has given you permission to use their confidential information in a GAI product, notwithstanding the risks, there is potential for a violation of either Rule 1.6(a) or your client's acceptable use policy. The former is an ethical breach, while the latter is a breach of contract that may cause you to lose a valuable client.
Provided you're under no acceptable use policy mandate from a client, you can in theory safely input confidential client information to a GAI product once you've reviewed and verified the provider's terms of use,[21] privacy policy[22] and data processing agreement.[23] This should be a lawyer's prime directive, as these documents collectively lay out the critical information needed to safely and ethically use GAI in the practice of law.
The State Bar of California offered guidance on the use of GAI, explaining that before inputting any confidential client information, a lawyer must ensure the AI system "adheres to stringent security, confidentiality, and data retention protocols."[24] And while perhaps not obligatory in every instance, as a general best practice a lawyer should, wherever possible, anonymize client information and avoid entering details that might be used to identify the client.[25]
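As an illustration of that anonymization practice, below is a minimal sketch that strips a few obviously identifying details from a prompt before it is sent anywhere; the patterns and the sample client details are invented for illustration, and no automated redaction substitutes for a lawyer's own review.

```python
import re

# Illustrative patterns only; real client data requires careful, human-reviewed redaction.
REDACTIONS = [
    (re.compile(r"\bA[- ]?\d{8,9}\b"), "[A-NUMBER]"),     # alien registration numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),      # Social Security numbers
    (re.compile(r"\bMaria Lopez\b"), "[CLIENT]"),          # a known client name (hypothetical)
]

def anonymize(prompt: str) -> str:
    """Replace identifying details with neutral placeholders before prompting a GAI tool."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

draft = ("Assess asylum eligibility for Maria Lopez, A-123456789, "
         "reachable at maria.lopez@example.com, SSN 123-45-6789.")
print(anonymize(draft))
# Prints: Assess asylum eligibility for [CLIENT], [A-NUMBER], reachable at [EMAIL], SSN [SSN].
```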
Data breaches already keep lawyers up at night; as technologies advance and more firms adopt GAI tools, data privacy and security become even more crucial.[26]
And the final and arguably most important leg of any firm GAI policy is making sure that employees, including lawyers, are properly trained on how to use the tool. This goes to a lawyer's supervisory responsibilities, whether that be the managerial authority of a partner over the law firm, or senior lawyers supervising associates, as contemplated by Model Rule 5.1 — or lawyers supervising nonlawyer paraprofessionals, including chatbots or agents.[27]
Conclusion
Although we find ourselves in the midst of challenging times, we should celebrate the beauty and creativity these unprecedented moments bring. The possibilities — and peril — that cognitive chatbots and autonomous agents bring to the legal industry truly are revolutionary.
But though the best LLM for a particularized use case will continue to fluctuate based on events beyond our control, our ethical obligations remain steadfast, a reassuring constant.
Michele Carney is a partner at Carney & Marchi PS and chair of the AILA Innovation & Technology Committee.
Marty Robles-Avila is senior counsel at Berry Appleman & Leiden LLP and a member of AILA's Innovation and Technology Committee.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
[1] Distrust That Particular Flavor - By William Gibson - Book Review - The New York Times (Jan. 13, 2012) ("'The future is already here. It's just not evenly distributed yet' — this quote is often attributed to Gibson, though no one seems to be able to pin down when or if he actually said it.").
[2] Introducing ChatGPT Gov | OpenAI ("Agencies can deploy ChatGPT Gov in their own Microsoft Azure commercial cloud or Azure Government cloud on top of Microsoft's Azure OpenAI Service. Self-hosting ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance requirements, such as stringent cybersecurity frameworks (IL5, CJIS, ITAR, FedRAMP High).").
[3] Elon Musk's DOGE is feeding sensitive federal data into AI to target cuts (Feb. 6, 2025) ("The DOGE team is using AI software accessed through Microsoft's cloud computing service Azure to pore over every dollar of money the department disburses, from contracts to grants to work trip expenses, one of the people said…Microsoft Azure can be used to access AI tools made by many different companies, and it is unclear which the DOGE workers used.").
[4] We refer to "most" because not all models are built the same and obtaining the best results from them may require different techniques. Reasoning best practices - OpenAI API (explaining best practices for using and prompting between OpenAI's so-called reasoning models (o1 and o3-mini, for example) and GPT models (like GPT-4o); because the former models perform reasoning internally, they do not need to be told to "think step by step"). But when dealing with complex ambiguous problems in the fields of science, engineering, and legal services, utilizing your domain expertise to provide context and nuance makes all the difference.
[5] LLM Leaderboard 2025 - Verified AI Rankings (lists over 30 models, proprietary and open weight, from a multitude of providers, allowing you to compare one model against another).
[6] David Lat, A Major Law Firm's ChatGPT Fail: Why is not citing fake cases so hard? (Feb. 7, 2025).
[7] Lawyers should proceed with caution when using DeepSeek. According to its Privacy Policy, data is stored in the People's Republic of China. DeepSeek Privacy Policy; see also International regulators probe how DeepSeek is using data. Is the app safe to use?, NPR.
[8] Introducing deep research | OpenAI ("An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you."); Gemini: Try Deep Research and Gemini 2.0 Flash Experimental ("Deep Research does the hard work for you. After you enter your question, it creates a multi-step research plan for you to either revise or approve. Once you approve, it begins deeply analyzing relevant information from across the web on your behalf. Over the course of a few minutes, Gemini continuously refines its analysis, browsing the web the way you do: searching, finding interesting pieces of information and then starting a new search based on what it's learned. It repeats this process multiple times and, once complete, generates a comprehensive report of the key findings, which you can export into a Google Doc. It's neatly organized with links to the original sources, connecting you to relevant websites and businesses or organizations you might not have found otherwise so you can easily dive deeper to learn more."). OpenAI's offering is currently priced at $200 per month, while Gemini is $20.
[9] Ethan Mollick, The End of Search, the Beginning of Research (Feb. 3, 2025).
[10] With the spotlight on DeepSeek's R1 reasoning model, much attention has focused on what's referred to as MoE, or mixture-of-experts, LLMs that use multiple specialized subnetworks, colloquially described as "experts" to process different types of prompts, reducing the cost of computation. What is mixture of experts? | IBM ("Mixture of Experts architectures enable large-scale models, even those comprising many billions of parameters, to greatly reduce computation costs during pre-training and achieve faster performance during inference time. Broadly speaking, it achieves this efficiency through selectively activating only the specific experts needed for a given task, rather than activating the entire neural network for every task.").
[11] DC Bar Ethics Opinion 388: Attorneys' Use of Generative Artificial Intelligence in Client Matters (April 2024) ("Due to the rapid development of technology in this area, we recognize that some of the concerns raised in this opinion may be resolved or mooted for particular products in the future, perhaps even in the near future.").
[12] There are, of course, other Model Rules implicated in the use of GAI; for example, Model Rule 1.5, which governs lawyers' fees and expenses, would apply to representation where GAI is used. We leave this question for another day, focusing instead on policies affecting its use.
[13] ABA Model Rule of Professional Conduct 1.6(a); 1.6 Comment 3.
[14] Model Rule 1.6(c).
[15] "Torrenting from a corporate laptop doesn't feel right": Meta emails unsealed, Ars Technica (Feb. 6, 2025), "Torrenting from a corporate laptop doesn't feel right": Meta emails unsealed - Ars Technica ("Newly unsealed emails allegedly provide the 'most damning evidence' yet against Meta in a copyright case raised by book authors alleging that Meta illegally trained its AI models on pirated books.").
[16] Model Rule 1.6(c), Comment 18.
[17] ABA Formal Opinion 512, Generative Artificial Intelligence Tools (July 29, 2024); see also Model Rule 1.4(a)(1) and (4) (requiring a lawyer to "promptly comply with reasonable requests for information" and of any "decision or circumstance with respect to which" informed consent is required by the rules, such as to a proposed course of conduct).
[18] Model Rule 1.1, Comment 8.
[19] Florida Bar Ethics Opinion 24-1 (Jan. 19, 2024), Opinion 24-1 – The Florida Bar, rightly notes that confidentiality concerns "may be mitigated by use of an inhouse generative AI rather than an outside generative AI where the data is hosted and stored by a third-party". The development and implementation of an in-house GAI product is outside of the scope of this article.
[20] Ethics Opinion 388: Attorneys' Use of Generative Artificial Intelligence in Client Matters ("Most technology companies that provide these [free] services make no secret of what they will do with any information submitted to them in connection with their publicly usable services: from their perspective, user inputs are theirs to use and share as they see fit.").
[21] The Terms of Use (or Service) typically include permitted uses for the AI product; treatment and ownership of user-generated content; limitations on liability; and the legal framework and jurisdiction for resolving disputes.
[22] The Privacy Policy details how the AI provider collects, uses, and stores user data, including the types of data collected, with whom it is shared, and security measures taken to protect it, such as data encryption at rest (where it is stored), in transit (when transmitted over networks) and SOC-2 compliance.
[23] A Data Processing Agreement or Addendum specifies how data is processed, especially in compliance with data protection regimes like GDPR and CCPA. This document also apportions the roles and responsibilities of data controllers and processors, security measures, and procedures for data breach notification. These agreements should be reviewed not only for the specific GAI product being contemplated, but also the terms for the cloud-based platform on which the LLM will be accessed via API (Application Programming Interface), whether that be Microsoft Azure (like the federal government, apparently), Google Cloud Platform, or Amazon Web Services.
[24] The State Bar of California Standing Committee on Professional Responsibility and Conduct, Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law (Nov. 16, 2023) ("A lawyer who intends to use confidential information in a generative AI product should [review the product's Terms of Use and] ensure that the provider does not share inputted information with third parties or utilize the information for its own use in any manner, including to train or improve its product.").
[25] State Bar of California Practical Guidance For the Use of Generative Artificial Intelligence, on Duty of Confidentiality.
[26] Emma Cueto, Legal Ethics Matters To Watch in 2025, Law360 (Jan. 1, 2025), https://www.law360.com/articles/2275830/legal-ethics-matters-to-watch-in-2025 (quoting Hilary Gerzhoy, vice president of the Rules of Professional Conduct Committee for the D.C. Bar: "'Firms are a huge target for cyberattacks; they have all sorts of sensitive information,' she said. 'I'm seeing renewed concern about securing client data, and when you're thinking about inputting information into these [AI] tools, that adds a new layer.'").
[27] ABA Model Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance) (in 2012, the Rule was modified to refer to the non-countable noun, "assistance" in lieu of "assistants").
For a reprint of this article, please contact reprints@law360.com.