
ABA Formal Opinion 512

On July 29, 2024, the American Bar Association’s Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, covering Generative Artificial Intelligence Tools. The Opinion says a lot, and we could do a long post discussing our feelings about it. Instead, this post tries to help clarify the most problematic and impactful section:

Self-learning GAI tools into which lawyers input information relating to the representation, by their very nature, raise the risk that information relating to one client’s representation may be disclosed improperly, even if the tool is used exclusively by lawyers at the same firm. … Accordingly, because many of today’s self-learning GAI tools are designed so that their output could lead directly or indirectly to the disclosure of information relating to the representation of a client, a client’s informed consent is required prior to inputting information relating to the representation into such a GAI tool.

… To obtain informed consent when using a GAI tool, merely adding general, boiler-plate provisions to engagement letters purporting to authorize the lawyer to use GAI is not sufficient.

On a quick read, this seems both scary and somewhat unclear. This piece will cover what Opinion 512 says, who and what technology it applies to, where it should be applied, and what you need to do if it does apply.

Before we get to any of that, let’s explain why we are worth listening to.

Why Are We Credible On This Topic?

If you don’t know us, here’s some background.

Adam holds a PhD in computer science from the University of Waterloo, where he worked with eDiscovery experts Dr. Gordon V. Cormack and Dr. Maura R. Grossman. He played a key role in building Kira’s ML engine, and today leads Zuva’s research, development, and product teams.

Noah has been in the contract analysis AI market since its early days. He co-founded Kira Systems in 2011 and (as Kira’s CEO until its sale to Litera in 2021) helped build it into a dominant contract analysis AI vendor. Today, he’s the CEO of Zuva. He is co-author of the WSJ-bestselling book “AI for Lawyers.” Previously, he was a corporate lawyer at Weil. He happens to have won a Legal Ethics prize back in law school.

What Opinion 512 Says

Opinion 512 says a lot. If you would rather not read the opinion in full, here’s a nice summary by Bob Ambrogi. We think many lawyers will find its “Confidentiality” section problematic, and we have quoted the most relevant parts below (including the really important bits we included in the intro above).

As a baseline, lawyers have a duty to keep their client’s information confidential. Generative AI raises risks that lawyers will inadvertently breach their client confidentiality obligations.

Before lawyers input information relating to the representation of a client into a GAI tool, they must evaluate the risks that the information will be disclosed to or accessed by others outside the firm. Lawyers must also evaluate the risk that the information will be disclosed to or accessed by others inside the firm who will not adequately protect the information from improper disclosure or use because, for example, they are unaware of the source of the information and that it originated with a client of the firm. Because GAI tools now available differ in their ability to ensure that information relating to the representation is protected from impermissible disclosure and access, this risk analysis will be fact-driven and depend on the client, the matter, the task, and the GAI tool used to perform it.

Self-learning GAI tools into which lawyers input information relating to the representation, by their very nature, raise the risk that information relating to one client’s representation may be disclosed improperly, even if the tool is used exclusively by lawyers at the same firm. This can occur when information relating to one client’s representation is input into the tool, then later revealed in response to prompts by lawyers working on other matters, who then share that output with other clients, file it with the court, or otherwise disclose it. … Accordingly, because many of today’s self-learning GAI tools are designed so that their output could lead directly or indirectly to the disclosure of information relating to the representation of a client, a client’s informed consent is required prior to inputting information relating to the representation into such a GAI tool.

When consent is required, it must be informed. For the consent to be informed, the client must have the lawyer’s best judgment about why the GAI tool is being used, the extent of and specific information about the risk, including particulars about the kinds of client information that will be disclosed, the ways in which others might use the information against the client’s interests, and a clear explanation of the GAI tool’s benefits to the representation. … To obtain informed consent when using a GAI tool, merely adding general, boiler-plate provisions to engagement letters purporting to authorize the lawyer to use GAI is not sufficient.

Because of the uncertainty surrounding GAI tools’ ability to protect such information and the uncertainty about what happens to information both at input and output, it will be difficult to evaluate the risk that information relating to the representation will either be disclosed to or accessed by others inside the firm to whom it should not be disclosed as well as others outside the firm.

Who Does Opinion 512 Apply To?

Opinion 512 is a US legal ethics opinion, so it only applies to lawyers practicing in the United States.¹ Opinion 512 is focused on obligations of outside counsel (i.e., law firms). It tries to reduce “risks that the information will be disclosed to or accessed by others outside the firm.” In-house lawyers are the “client” here, and should be able to decide for themselves what is appropriate for their organization.

Of course, ABA committee opinions are not law, so US lawyers don’t necessarily have to follow them at all. Many lawyers will, however.

What Tech Does Opinion 512 Apply To?

The confidentiality section of Opinion 512 discussed above applies to:

  1. self-learning generative AI tools,
  2. used by law firms,
  3. where information related to client representations is placed into them.

Assuming you’re at a law firm, to evaluate whether Opinion 512 applies to the tool in question, you need to determine:

1. Is the AI tool a generative AI tool?

  • Not all AI is generative AI. Non-generative machine learning remains heavily used in legaltech, including in eDiscovery and contract analysis. Rules-based systems are also used (e.g., expert systems, simple document generation tools).
  • To find out, ask the vendor how their tool works and whether it incorporates generative AI. This requires understanding the tool’s architecture and functionality, specifically whether it stores, learns from, or otherwise retains data inputs. Lawyers should work closely with IT professionals or external technology consultants to accurately determine these capabilities. Strictly speaking, none of this matters under Opinion 512 if the application isn’t generative AI, but we think it’s good practice to know this either way.
    • Note that generative AI tools are so popular at the moment that many vendors have tried to figure out ways to incorporate generative AI into their applications, sometimes in non-essential ways. It may be possible to leave some of this generative AI functionality “off”, meaning that the tool would not be a generative AI tool for purposes of this analysis. Note that vendors also sometimes offer an option for customers to connect their own generative AI model to a given system. For example, Kira (a Litera company that Adam and Noah worked at before its sale) allows its customers to connect their own Microsoft Azure OpenAI key to Kira to enable a generative AI feature called “smart summaries” (a sketch of what a bring-your-own-key setup can look like appears after this list). It’s a somewhat open question how this works under Opinion 512. Presumably, the firm would have already done an Opinion 512 analysis of the underlying AI model, but it should also analyze whether integrating generative AI into an otherwise non-generative system raises Opinion 512 issues.

2. If it is a generative AI tool, is it a self-learning generative AI tool?

  • If the tool is self-learning, it takes data from previous users to improve performance for future users. Some vendors explicitly build this functionality into their systems. For example, OpenAI says:

One of the most useful and promising features of AI models is that they can improve over time. … When you share your content with us, it helps our models become more accurate and better at solving your specific problems and it also helps improve their general capabilities and safety. We don’t use your content to market our services or create advertising profiles of you—we use it to make our models more helpful. ChatGPT, for instance, improves by further training on the conversations people have with it, unless you opt out.

  • Many vendors allow users to opt out of self-learning (as in the previous example). For example, ChatGPT uses conversations for training by default, but users can now opt out. And OpenAI says they do not use content from their business offerings such as ChatGPT Team, ChatGPT Enterprise, and their API Platform to train their models. Most (all?) vendor systems built using OpenAI tech (including via Microsoft Azure²) will be built on OpenAI’s APIs, which do not use customer inputs for training by default. Note that while the API endpoints might be the same across different deployments of a similar generative model, there’s no guarantee that the data is handled the same way (e.g., in your own Azure instance you control the data, whereas data sent directly to OpenAI is not necessarily segregated).
  • Note that it’s possible for generative AI tools to learn both implicitly and explicitly. “Implicit” learning is like the ChatGPT case, where the model learns behind the scenes without the user intentionally triggering the training. “Explicit” learning is the intentional training of a generative AI system, e.g., fine-tuning using the OpenAI API. Both carry similar risks, but explicit learning is in the “user’s” control, so users should be more aware of what is going into the model’s training data (the fine-tuning sketch after this list illustrates this control).
  • Not all generative AI tools self-learn. For example, Zuva has LLM-based “Answers” functionality, which uses generative AI to select from a set of predetermined answer choices (e.g., “Can the agreement be assigned? a) Freely assignable; b) Assignable with notice; c) Assignable with consent; …”). This Answers technology uses generative AI, but it does not self-learn (the last sketch after this list shows what this kind of inference-only pattern can look like).
  • To figure this out, ask your vendor how their technology works and whether it is self-learning. Again, make sure to get help from people with IT backgrounds. It is also very important to look at the vendor’s contract (e.g., the OpenAI/Azure license terms) and to assess the vendor’s ability to deliver on their contractual obligations (e.g., their security, and whether you trust them to execute on what they say they do on security).

3. Will information relating to the representation of a client go into the tool?

  • Evaluate what you might possibly use the tool for.
  • It is possible to use generative AI in the course of a legal representation without placing client confidential information into it. For example, it might be helpful to have a memo on a non-specific point of law (e.g., “what counts as a sale of ‘all or substantially all’ assets under Arizona law?”). This question would not itself give away any client confidential information.
  • It may be possible to use a generative AI tool in ways where client confidential information could go into it but wouldn’t necessarily have to. This is potentially risky. If a law firm takes the position that they do not need to disclose use of a self-learning generative AI tool to their clients because client confidential data will not go into it, they need to be very sure that their users don’t mess up and put client confidential data into the tool.
    • If taking the approach of not allowing client confidential data into your generative AI tool, make sure lawyers and staff with access to the tool are appropriately trained in what can go into it. Ideally, vendors providing generative AI products not intended for client confidential data will offer controls in their tools to prevent client confidential data from going in. The alternative is to set up a procedure to ensure that those with access are following the rules, ideally including auditing compliance (which depends on the vendor of that not-for-client-confidential-work tool providing audit capabilities).
  • In contrast, if you are doing prompt engineering, especially where prompts are retained for use on other matters, prompts can inadvertently leak client confidential information, e.g., when prompts are refined to force generation onto a particular aspect using case-specific details, or when they include examples of (un)desirable language.
    • While the prompts themselves may or may not be used to refine the underlying generative AI, you are still providing client confidential information to the tool, and possibly to others, depending on how prompts are stored and shared.
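
To make the earlier “bring your own key” point more concrete, here is a minimal sketch of what connecting your own Microsoft Azure OpenAI resource typically looks like using the openai Python SDK. It is purely illustrative and is not how Kira’s integration actually works: the endpoint, environment variable, deployment name, and prompt are all placeholders.

```python
# Minimal sketch (not Kira's actual integration): calling a generative model
# through YOUR OWN Azure OpenAI resource, so prompts and documents flow to an
# Azure instance you control, under terms you negotiated with Microsoft.
# Endpoint, key variable, and deployment name are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # your Azure OpenAI resource
    api_key=os.environ["AZURE_OPENAI_API_KEY"],                 # your key, not the vendor's
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-gpt-deployment-name>",  # a deployment you created in your own resource
    messages=[{"role": "user", "content": "Summarize the assignment clause below: ..."}],
)
print(response.choices[0].message.content)
```

The practical point for an Opinion 512 analysis is that the firm (or its client) controls which Azure resource the data flows to and on what contractual terms.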
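
On the implicit vs. explicit learning point above, here is a hedged sketch of what “explicit” learning can look like: the firm deliberately assembles a training file and submits it for fine-tuning, so it controls exactly what goes into the model’s training data and can screen out client confidential information first. This assumes the OpenAI fine-tuning API; the file name and model snapshot are placeholders, and none of this is a recommendation to fine-tune on client material.

```python
# Sketch of "explicit" learning via fine-tuning: the firm chooses the training
# data, so it can review it for client confidential information before anything
# is submitted. File name and model snapshot are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload a training file that has already been reviewed for confidential content.
training_file = client.files.create(
    file=open("screened_training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on that file, and only that file.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)
```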
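
Finally, on the non-self-learning point: here is a rough sketch of how an “Answers”-style feature can use a generative model purely at inference time, asking it to pick one of a fixed set of answer choices. This is our own illustration, not Zuva’s actual implementation; the model name and prompt wording are assumptions, and it relies on OpenAI’s stated default of not training on API inputs.

```python
# Illustrative sketch (not Zuva's actual implementation): use a generative
# model only to select one of several predetermined answer choices. The model
# generates an answer at inference time but is not trained on the clause text.
from openai import OpenAI

client = OpenAI()

CHOICES = [
    "a) Freely assignable",
    "b) Assignable with notice",
    "c) Assignable with consent",
]

def can_agreement_be_assigned(clause_text: str) -> str:
    """Ask the model to answer with exactly one predetermined choice."""
    prompt = (
        "Question: Can the agreement be assigned?\n"
        "Answer with exactly one of the following choices and nothing else:\n"
        + "\n".join(CHOICES)
        + "\n\nClause:\n"
        + clause_text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```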

Where should Opinion 512 be applied?

By its terms, Opinion 512 should be applied to

  1. self-learning generative AI tools,
  2. used by law firms,
  3. where information related to client representations is placed into them.

Opinion 512 focuses exclusively on generative AI, but all sorts of legal tech systems (including “non-generative” ones) can leak information. A more consistent approach might have been to first set broad rules for any technology that allows information to be leaked, and then apply this framework to generative AI systems (as an example of how to apply the framework). Instead, we have one framework for generative AI, and—maybe—another for all other technology.

For example, why is self-learning AI a problem where, say, using a previously drafted document as a precedent wouldn’t be? Most firms run their document management system (“DM”) with default-open / opt-out permissions. If a lawyer would like to find old master services agreements to use as precedent, they can freely search their firm’s DM, find agreements developed for other clients, and incorporate them into their new work product. (They might also use an explicit firm template agreement as precedent.) Of course, some documents/matters will be very confidential (or have a client unwilling to have their work product shared inside the firm³) and only specific lawyers will have opt-in access to these documents. Also, sometimes an ethical wall will be put in place, which will impact the ability of some firm lawyers to access certain documents. But, outside these situations, default-open use of precedent documents is very much the norm in Biglaw. Isn’t the use of precedent documents from one client on the work of another client “directly [leading] to the disclosure of information relating to the representation of a client?” Isn’t this only problematic in limited situations, where closed DM access is put in place? Why wouldn’t Opinion 512 view generative AI similarly, placing more restrictions only in high-risk situations? Or does the ABA think that it is inappropriate to use precedent materials in the representation of other clients? We don’t know! Ask the ABA Standing Committee on Ethics and Professional Responsibility (except their names aren’t easily accessible online)!

What do you need to do if Opinion 512 applies?

If a piece of technology you’re using is subject to Opinion 512,

a client’s informed consent is required prior to inputting information relating to the representation into such a GAI tool.

When consent is required, it must be informed. For the consent to be informed, the client must have the lawyer’s best judgment about why the GAI tool is being used, the extent of and specific information about the risk, including particulars about the kinds of client information that will be disclosed, the ways in which others might use the information against the client’s interests, and a clear explanation of the GAI tool’s benefits to the representation. … To obtain informed consent when using a GAI tool, merely adding general, boiler-plate provisions to engagement letters purporting to authorize the lawyer to use GAI is not sufficient.

Talk to your client. This is good practice anyway. While clients have heard bad stories about things going wrong with generative AI and may be a bit hesitant, many clients (in our experience) yearn for their lawyers to work more efficiently.⁴

The formal process of getting consent from your client should involve the following components:

  1. Explanation of Purpose: The lawyer must clearly explain why the generative AI tool is being used. This includes detailing the specific tasks the tool will perform, such as drafting documents, analyzing contracts, or conducting legal research. For example, if a generative AI tool is employed to automate contract review, the lawyer should describe how it will identify key clauses and flag potential issues.

  2. Disclosure of Risks: The lawyer must provide a detailed explanation of the risks associated with the use of the generative AI tool. This includes the potential for data breaches, unauthorized access, and the inadvertent disclosure of confidential information. The lawyer should discuss how the generative AI tool’s self-learning capabilities might lead to unintended sharing of sensitive data, even within the firm. For instance, if a self-learning generative AI tool might reveal information about one client when responding to prompts related to another client’s matter, this risk must be explicitly communicated.

  3. Safeguards and Protections: The lawyer should outline the measures in place to protect the client’s information. This includes data encryption, access controls, and vendor policies that limit data access to authorized personnel only. For example, if the generative AI tool is hosted in a self-hosted or dedicated cloud environment with strict access controls, these details should be shared to inform the client about data security.

  4. Potential Consequences: The lawyer must explain how the information could be used against the client’s interests if it were improperly disclosed. This includes discussing scenarios where the data might be accessed by opposing parties or otherwise exploited in legal proceedings. For example, if sensitive financial information is disclosed, it could impact the client’s negotiating position in a merger or acquisition.

  5. Benefits of the Generative AI Tool: Lawyers can also convey the benefits of using the generative AI tool. This potentially includes increased efficiency, reduced costs, increased scope of work, and improved accuracy in legal tasks. For instance, by using a generative AI tool to automate document review, the client may benefit from quicker turnaround times and lower legal fees, or perhaps same (or higher) fees but a significantly expanded scope of work.

  6. Specific and Tailored Consent: The client’s consent must be specific and tailored to the particular use of the generative AI tool in their case. This means the consent cannot rely on general, boilerplate language. Instead, the consent must explicitly cover the specific data to be used, the nature of the tasks the generative AI tool will perform, and the risks and safeguards involved. For example, a client engaged in due diligence contract review would need to consent explicitly to the use of generative AI in analyzing diligence documents, with clear terms addressing the confidentiality of those documents, and clients may then need to consent separately for using the outputs of this analysis in report generation.

Conclusion

We have been on the client side of a law firm AI consent/disclosure. When we sold Kira in 2021, the really excellent high-end firm representing Kira’s buyer gave us a form disclosure that they planned to use cloud-based AI technology (Kira!) to review our agreements in the course of due diligence. We—of course—said “okay!” As the CEO of the seller (though not the firm’s client), Noah found the disclosure really not useful, and he doesn’t know what was gained by providing it. Noah is skeptical that he would have felt differently about the disclosure’s value had the buyer’s firm followed the “What do you need to do if Opinion 512 applies?” guidance above.

When weighing disclosure rules around generative AI, it’s worth asking what’s gained (beyond making lawyers or regulators feel good): whether clients care about this disclosure, whether they (or the lawyers explaining the tech) can actually understand the technology and its risks, and whether clients would like their lawyers spending effort on this. In general, barriers slow adoption, and Opinion 512 could do that to generative AI within US law firms. On balance, are organizations like the ABA helping clients if they discourage generative AI use (given that, at its best, AI can enable lawyers to deliver better quality work product for less effort/cost)?⁵

ABA and other guidance on generative AI will evolve and should improve over time. Ideally this piece makes it easier for lawyers to comply until things change.

Are we biased?

Does Formal Opinion 512 apply to Zuva? You might have noticed that Adam and Noah both work at Zuva. Are our feelings on Opinion 512 biased by our commercial interests? Maybe, but we think probably not.

As part of selling Kira in 2021, Zuva is pretty restricted in selling to law firms until 2026. Opinion 512 primarily applies to law firm lawyers. In-house teams (e.g., an in-house M&A team using Zuva to help them do faster and more accurate due diligence contract review) have different legal ethics restrictions than law firm lawyers working for clients do, and Opinion 512 is very focused on responsibilities of law firm lawyers.

Alternative legal service providers arguably could be subject to Opinion 512, and we have relationships with some of them. Also, users of our API include legaltech vendors and law firms, and they and their users could be subject to Opinion 512. So there are ways that Opinion 512 could impact us.

Basically, we don’t think we’re biased here, but you may disagree.

Since Zuva probably isn’t caught by Opinion 512, why did we bother writing this piece? Three reasons: First, we have lawyers using our AI tech. Opinion 512 is strongly worded, and we thought some might have questions about whether it applies to our system. Second, some software vendors who might sell to law firms are customers of our API. We thought this could be helpful for those vendors and their law firm customers. Third, we saw Opinion 512, started talking about it a bunch, thought it could be interesting to share our take externally, and then realized a more editorial piece might not be as practically helpful as a piece like this, so we wrote this one instead.


1 It’s actually a bit more complicated than this, but—for the purposes of this post—not too meaningful.

2 Though Microsoft’s own tools will have their own terms.

3 The CIO of a very large, very high-end law firm recently told us that they occasionally receive client outside counsel guidelines requiring that access to the client’s work product not be shared internally. Apparently, the firm generally refuses to agree to this.

4 Of course, our experience may be partially caused by selection bias, in that clients we know often are ones who care more about efficiency.

5 The ABA Committee might say they are trying to help generative AI adoption by providing the clarity of instructions for its use. While Opinion 512 could be better, we are optimistic that ABA guidance on generative AI will improve over time.