Stop Treating AI Chats as Private: What United States v. Heppner Means for Privilege, Confidentiality, and Discovery
It’s 2026, and many professionals still treat generative-AI chats like a private channel. In United States v. Heppner, a federal court rejected that assumption in a way that matters for privilege, confidentiality, and discovery. The court held that certain AI-generated materials created through a defendant’s interactions with Anthropic’s Claude were not protected by attorney-client privilege or the work-product doctrine, making them subject to production. For executives, in-house teams, and litigants using consumer AI tools to analyze sensitive facts or strategy, Heppner carries a blunt message: your prompts and outputs may be treated like communications with a third party, not a protected channel. So if you think your “private” prompt is safe, think again.
The underlying fact pattern is straightforward and increasingly common. After receiving a subpoena and before indictment, the defendant used a consumer generative-AI tool to help organize facts and develop defense-related materials. He did so on his own, not at counsel’s direction. Those AI-generated materials later became the subject of a discovery dispute, one the court resolved by concluding that neither attorney-client privilege nor work-product protection applied.
The court rejected the privilege claim at the threshold because the AI documents were not communications between the defendant and counsel. The court noted that Claude is not an attorney, and “in the absence of an attorney-client relationship, the discussion of legal issues between two non-attorneys is not protected by attorney-client privilege.” The opinion also emphasized that recognized privileges presuppose a trusting human relationship with a licensed professional who owes fiduciary duties and is subject to professional discipline, something that cannot exist between a user and a consumer AI platform.
Privilege also failed on confidentiality. The court relied heavily on Anthropic’s written privacy policy, which states that the company collects both a user’s “inputs” and the model’s “outputs.” The company goes on to warn that it may use that data to “train” Claude and reserves the right to disclose it to “third parties,” including “governmental regulatory authorities,” and in connection with “claims, disputes, or litigation.” On that basis, the court concluded the defendant could not have had a “reasonable expectation of confidentiality” in communications with Claude. The court distinguished these materials from the private notes a client prepares for counsel: here, the defendant had first shared the equivalent of those notes with a third party.
The work-product argument fared no better. Even assuming the materials were created in anticipation of litigation, the court held the doctrine did not apply because the AI documents were not prepared “by or at the behest of counsel” and did not reflect defense counsel’s strategy. Defense counsel conceded that he “did not direct” the Claude searches and that the documents were prepared by the defendant “on his own volition.” The court concluded that the defendant was not acting as counsel’s agent for work-product purposes.
The implications are broader than this one defendant. Heppner is a warning to at least three groups: (1) self-represented litigants who use consumer AI to draft pleadings or test defenses; (2) corporate employees who paste internal contracts, investigation summaries, or privileged communications into AI tools for “summaries” or “risk flags”; and (3) criminal defendants and witnesses who use AI to organize facts or refine narratives while an investigation is underway. The common problem is the same: using a third-party platform to generate litigation-adjacent materials can undermine confidentiality and make those materials fair game in discovery.
Importantly, the opinion is not a blanket rule that “AI can never be involved in privileged work.” The court addressed the specific record before it: a consumer AI platform, used independently by the client, under terms that contemplate collection and potential disclosure of user data. On those facts, the privilege and work-product arguments failed. That leaves open the practical possibility that different facts—such as use of a contractually restricted enterprise system under counsel’s direction with strong confidentiality controls—could present a different analysis.
For lawyers practicing in the Fifth Circuit, the “necessary intermediary” concept is not an easy escape hatch. Under Fifth Circuit authority applying the United States v. Kovel framework, extending privilege to third-party assistance turns on necessity and integration into the provision of legal advice—not convenience. If an AI tool is used simply because it is faster or cheaper, that record is a problem. To have any serious privilege argument, the use must be tightly controlled, at counsel’s direction, and demonstrably necessary to interpret or translate information for legal advice.
Heppner is likely the first of many fights over generative AI’s proper role in litigation. The practical takeaway for clients and attorneys is simple: don’t treat consumer AI like a privileged channel unless the tool is contractually locked down, used at counsel’s direction, and truly necessary to provide legal advice.
The safest course is to assume that every prompt you type could become a future exhibit. If the facts are sensitive, don’t type them into a browser window or a public-facing chatbot. Instead, use an enterprise system your lawyer has approved, with terms that prohibit human review by the provider and bar training on your data. However, given the quickly shifting legal landscape, the most prudent option may be the most conventional: close the laptop, lock your phone, and talk to a human attorney who actually owes you a duty of confidentiality.