AI Chats Are Discoverable: What Lawyers Are Missing About Consumer vs. Enterprise Tools
Most of the conversation around AI in the legal profession is still stuck on the wrong issue.

We keep talking about hallucinations. Fake cases. Bad citations. The fear is that lawyers will rely on AI, get the law wrong, and embarrass themselves in court. That risk is real, and courts have already dealt with it in a very direct way. But it is also the easiest problem to fix.

You can avoid fake law by doing what lawyers are already required to do: read what you cite.

The harder problem, and the one that is quietly becoming far more important, is that lawyers and clients are creating discoverable evidence every time they use consumer AI tools. And most of them do not realize it. Once you understand that, the rest of the analysis follows pretty quickly.

The real distinction is not AI versus no AI. It is consumer AI versus enterprise AI, and whether the information being entered into these systems is actually protected or effectively being disclosed.

Courts are not struggling with this. They are applying existing rules. And those rules are not particularly forgiving.

The issue is not just that AI can be wrong. It is that, in many cases, using consumer AI is functionally no different from sharing information with a third party.

The Obvious Problem Everyone Focuses On

The starting point for most discussions is still Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). In that case, attorneys filed a brief citing judicial decisions that did not exist. The cases were generated by an AI tool, and no one verified them before filing.

The court did not treat this as a technology issue. It treated it as a Rule 11 problem.

Attorneys have an affirmative duty to conduct a reasonable inquiry into the law before filing anything. That duty does not change because a tool produces something that looks polished. A fake case is not a weak argument. It is not law at all.

The court sanctioned the attorneys and, more importantly, emphasized that lawyers act as gatekeepers. You can use tools, but you cannot outsource judgment.

California followed quickly. In Noland v. Land of the Free, L.P., 114 Cal. App. 5th 426 (2025), an appellate brief contained 23 quotations, 21 of which were fabricated by generative AI. The Court of Appeal published the opinion specifically to make a point that should not need to be said: if you cite a case, you are expected to have read it.

These cases matter. But they are also the low-hanging fruit. They are obvious failures, and they are preventable.

The profession will adjust to that.

The Problem That Actually Matters

The more consequential issue is not accuracy. It is confidentiality.

Lawyers are used to thinking of drafting as a private process. You test arguments, explore facts, and refine strategy in a space that feels internal. Historically, that intuition has been correct.

Consumer AI tools create the same feeling. You type in facts, ask questions, and receive something that looks like work product. It feels like you are thinking out loud in a private workspace.

You are not.

That distinction is no longer theoretical. In United States v. Heppner (S.D.N.Y. Feb. 17, 2026), the court addressed whether materials created through a publicly available AI platform were protected by attorney-client privilege or the work product doctrine.

The answer was no.

The court’s reasoning was straightforward and, frankly, inevitable. Communications with a public AI system are not communications with counsel. They are not inherently confidential. And to the extent information is shared with the platform, it is shared with a third party.

That last point matters most. Once information is disclosed outside a protected relationship, privilege is either lost or never attaches in the first place.

The court went even further and made clear that even if the user inputs information originally learned from counsel, sharing that information with a public AI tool can waive privilege.

That is not a subtle shift. It is a structural one.

Why “Consumer vs. Enterprise” Actually Matters

This is where most lawyers misunderstand the issue.

The difference between consumer and enterprise AI is not branding or marketing. It is legal risk.

Consumer tools are built for broad public use. They often retain inputs, process them externally, and operate under terms that do not guarantee confidentiality in any legally meaningful sense. From a litigation perspective, that means anything entered into those systems may later exist as discoverable data.

Enterprise systems, by contrast, are designed with contractual confidentiality, restricted data use, and controlled environments. When properly implemented, they can function more like other secure legal technologies that lawyers already rely on.

But even that distinction has limits. Enterprise tools reduce risk. They do not eliminate it. Lawyers still need to understand how the system works, what data is stored, and whether anything is shared beyond the platform.

The key point is that courts are not creating new rules for AI. They are applying existing ones. If you disclose information to a third party, you should assume it may be discoverable.

In other words, the risk is not just discoverability. It is that privilege may never attach at all, or may be waived by the disclosure itself.

AI Chats Are Just Another Category of Evidence

Once you look at it that way, the discovery implications become obvious.

AI interactions are just another form of electronically stored information. They sit alongside emails, text messages, and internal notes. If they are relevant, they are potentially discoverable.

In some cases, they may be more revealing than traditional evidence. AI chats often capture how someone was thinking in real time. They can include summaries of events, admissions, evolving legal theories, or attempts to frame a narrative.

That is exactly the type of material opposing counsel will want.

And unlike a rough draft that never leaves a lawyer’s computer, these interactions may exist on third-party systems with their own retention policies.

Preservation Obligations Now Include AI

This is where practice has not caught up yet.

Once litigation is reasonably anticipated, parties have a duty to preserve relevant electronically stored information. There is no exception for AI chats, prompts, or outputs. If anything, those materials should be presumed discoverable unless there is a clear reason otherwise.

That means litigation hold notices need to evolve.

It is no longer enough to tell clients to preserve emails and text messages. They should also be instructed to preserve any AI-related materials, including chat histories, prompts, and outputs generated in connection with the dispute.

Just as importantly, they need to be told not to delete or “clean up” those interactions. The instinct to treat AI chats as disposable is understandable, but it creates real spoliation risk if the data turns out to be relevant.

This is not a new legal obligation. It is an existing one applied to a new category of data.

The Takeaway Lawyers Should Actually Focus On

None of this means lawyers should avoid AI. Used properly, these tools can improve efficiency and help manage costs in ways that are difficult to ignore.

But the way many lawyers and clients are currently using AI, particularly through consumer platforms, is creating a parallel record of communications that is neither privileged nor protected.

That record may ultimately become evidence.

The law has not changed. Courts are applying the same principles they always have. What has changed is how easily people can create a detailed, timestamped record of their own thoughts, strategies, and narratives without realizing it.

And in litigation, those records rarely stay private.
