
THE TECHNOLOGY BLIND SPOT
On February 10, 2026, Judge Jed S. Rakoff of the Southern District of New York looked at defense counsel and delivered a single sentence that should stop every attorney who uses AI mid-keystroke: “I’m not seeing remotely any basis for any claim of attorney-client privilege.”
The case involved Bradley Heppner, a Texas financial services executive charged with orchestrating a $150 million securities and wire fraud scheme. After learning he had become a law enforcement target in 2025, Heppner turned to Anthropic’s Claude, a commercially available AI tool, to prepare 31 documents analyzing his legal situation. He outlined defense strategies. He explored legal arguments. He organized his thoughts about the government’s investigation. Then he sent those documents to his defense counsel at Quinn Emanuel Urquhart & Sullivan.
Federal agents seized the documents during Heppner’s arrest at his Dallas mansion. The government moved to compel production. Quinn Emanuel objected, asserting both attorney-client privilege and work product protection. Judge Rakoff rejected both claims from the bench, later issuing a written opinion that erected a doctrinal framework with implications well beyond the facts of this case.
The reasoning sent a clear signal to the legal profession: when you feed privileged information into a third-party commercial AI tool that expressly disclaims confidentiality, you have voluntarily disclosed that information to a third party. Privilege evaporates. And the fact that you later shared the output with your lawyer cannot retroactively restore what you already gave away.
The Direct Answer
United States v. Heppner establishes that documents created using consumer AI tools and later transmitted to counsel are not protected by attorney-client privilege or work product doctrine. The ruling exposes a structural vulnerability in how attorneys and clients use AI, and it points toward a specific solution: law firms must deploy their own AI systems, operating within the firm’s confidentiality perimeter, to preserve privilege for AI-assisted legal work.
This analysis presents both sides of the Heppner argument, examines why the ruling follows logically from existing privilege doctrine, and argues that law firm-owned large language models (LLMs) or small language models (SLMs) represent the path forward for maintaining privilege in an AI-augmented legal practice.
What Happened in Heppner
The facts matter because they reveal how easily privilege can collapse in the AI context. Heppner, already represented by Quinn Emanuel, used Anthropic’s consumer version of Claude to prepare documents related to his criminal case. Defense counsel later informed the government that Heppner had “run queries related to the Government’s investigation through an AI tool (Claude) created by a third-party company, Anthropic,” and that the resulting documents would appear on seized electronic devices.
The government filed a motion for a ruling that these 31 AI-generated documents carried no privilege protection. The motion attacked on two fronts: attorney-client privilege and work product doctrine.
The Government’s Case: Why Privilege Failed
The government’s arguments built systematically on established privilege law, and every element landed.
Attorney-client privilege requires four elements: a communication, between privileged parties only, made for the purpose of obtaining or providing legal advice, that is confidential when made and intended to remain confidential. The government demonstrated that Heppner’s AI interactions failed every element.
First, Claude holds no law license, owes no duty of loyalty or confidentiality, and operates outside professional regulation. The tool’s own documentation states that it “chooses the response that least gives the impression of giving specific legal advice” and “instead suggests asking a lawyer.” An AI chatbot that actively disclaims providing legal advice cannot serve as the attorney in an attorney-client communication.
Second, Anthropic’s privacy policy informed users that the company gathers prompts and outputs, uses them to train its models, and may disclose this data to “governmental regulatory authorities” and “third parties.” Heppner agreed to these terms when he used the service. He voluntarily disclosed information to a commercial entity that expressly told him it would not maintain confidentiality. The government analogized this to conducting Google searches or checking out library books: the underlying activity does not become privileged simply because the user later discusses the results with a lawyer.
Third, the government invoked the well-settled rule that preexisting unprivileged documents do not become privileged merely because a client later transmits them to counsel. Heppner created the documents first, then sent them to Quinn Emanuel. The sequence ran backward. You cannot retroactively cloak an unprivileged document with privilege by handing it to your lawyer after the fact.
The Defense’s Arguments: Why Privilege Should Apply
Quinn Emanuel advanced two theories, neither of which gained traction.
On attorney-client privilege, defense counsel argued that the documents incorporated information Quinn Emanuel had conveyed to Heppner during the representation. The documents existed for the “express purpose of talking to counsel” and obtaining legal advice. They served as a mechanism for consolidating the client’s thoughts to share with his attorneys. Under this theory, the documents functioned as client-to-attorney communications prepared in anticipation of legal consultation.
On work product, defense counsel argued that a report created by a client in anticipation of litigation should receive protection even without direction from legal counsel. Attorney Benjamin O’Neil contended that the documents “incorporate information that Quinn Emanuel conveyed to Heppner” and that “it’s not as if these were documents created during the course of the alleged scheme.” He conceded, however, that the documents “were prepared by the defendant of his own volition.”
That concession proved fatal. Prosecutor Alexandra Rothman responded that the documents did not “reflect the legal strategy” of Heppner’s defense team. Judge Rakoff agreed.
The Ruling
Judge Rakoff disposed of the privilege claim immediately from the bench on February 10: “I’m not seeing remotely any basis for any claim of attorney-client privilege.” The defendant “disclosed it to a third-party, in effect, AI, which had an express provision that what was submitted was not confidential.” Communications among non-privileged parties, transmitted through a platform that expressly disclaims confidentiality, cannot satisfy the privilege framework.
The written opinion, issued February 17, 2026, grounded the ruling in a framework that extends beyond Heppner’s specific facts. Rakoff identified “at least two, if not all three” elements of attorney-client privilege as absent from Heppner’s AI interactions, then systematically addressed each one.
On the first element, the court confronted a counterargument that several commentators had raised: that AI inputs function more like the use of cloud-based word processing software than like communications, making the question of whether Claude qualifies as an “attorney” irrelevant. Rakoff rejected this reframing. The argument, he found, “only cuts against the invocation of privilege” because all recognized privileges require “a trusting human relationship,” specifically a relationship “with a licensed professional who owes fiduciary duties and is subject to discipline.” No such relationship exists, or could exist, between a user and an AI platform. The court drew this framework from Professor Ira P. Robbins’s scholarship identifying four elements common to all recognized privileges: a trusting human relationship, an enforceable duty of confidentiality on the recipient, communications made for the protected purpose, and a public interest sufficient to outweigh lost evidence. AI interactions fail the first two elements categorically.
On confidentiality, Rakoff sharpened the analysis beyond the oral ruling. He cited a recent decision from the same district observing that AI users lack “substantial privacy interests” in conversations they “voluntarily disclosed” to a publicly accessible platform. The court explicitly rejected the analogy between AI-generated documents and confidential client notes prepared for attorney consultation: Heppner “first shared the equivalent of his notes with a third-party, Claude.”
On whether Heppner communicated with Claude for the purpose of obtaining legal advice, Rakoff acknowledged “a closer call” because Heppner’s counsel asserted the documents existed for the “express purpose of talking to counsel.” But Heppner did not use Claude at counsel’s suggestion or direction. Had counsel directed Heppner to use Claude, the AI “might arguably” have functioned as a lawyer’s agent within the privilege framework. Without that direction, the relevant question became whether Heppner intended to obtain legal advice from Claude itself. Claude disclaims providing legal advice. When the government asked Claude directly, it responded: “I’m not a lawyer and can’t provide formal legal advice or recommendations.”
The opinion’s most striking language addressed the defense’s attempt to retroactively claim privilege. Non-privileged communications, Rakoff wrote, “are not somehow alchemically changed into privileged ones upon being shared with counsel.” Documents that “would not be privileged if they remained in [Heppner’s] hands” did not “acquire protection merely because they were transferred” to counsel.
In a footnote, Rakoff delivered the ruling’s sharpest blow to the defense. Even the privileged information that Quinn Emanuel had conveyed to Heppner during the representation lost its protection once Heppner fed it into Claude. Sharing privileged attorney-client communications with a commercial AI platform constitutes waiver, “just as if he had shared it with any other third party.” The privilege belongs to the client, but so does the responsibility to maintain it. Heppner’s 31 documents did not merely fail to acquire new privilege. They destroyed existing privilege over information his own lawyers had given him.
On work product, the written opinion reinforced the oral ruling. Heppner’s counsel confirmed that the documents “were prepared by the defendant on his own volition.” That concession proved dispositive: Heppner acted outside the scope of attorney direction, so the documents neither qualified as materials prepared “by or at the behest of counsel” nor reflected defense counsel’s strategy. Rakoff also took the unusual step of respectfully disagreeing with Shih v. Petal Card, Inc. (S.D.N.Y. 2021), a magistrate judge decision that had extended work product protection to litigation-related communications even without attorney direction. That holding, Rakoff wrote, “undermines the policy animating the work product doctrine,” which exists to protect lawyers’ mental processes, not a client’s independent research.
Judge Rakoff added one notable caveat: if prosecutors attempt to use the AI-generated information at trial, it could create a “witness-advocate conflict” because Quinn Emanuel would become a witness to the circumstances of the documents’ creation. That scenario could force a mistrial.
Steelmanning the Ruling: Why It Follows from Existing Doctrine
The defense’s position, though sympathetic, collides with fundamental privilege architecture. Attorney-client privilege exists to protect a specific relationship: confidential communications between a client and a licensed attorney for the purpose of legal advice. Every element serves a gating function.
Heppner communicated with a commercial software product, not an attorney. He did so through a platform that explicitly told him his inputs would not remain confidential. He acted on his own initiative, not at counsel’s direction. And he created the documents before transmitting them to his lawyers, inverting the sequence that privilege doctrine requires.
Remove the AI element entirely: if Heppner had dictated his legal analysis to a commercial transcription service whose terms of service stated that all dictation would be recorded, retained, and potentially shared with third parties, no court would find privilege. The AI tool occupies the same structural position as that transcription service. The technology is novel. The privilege analysis is not.
The written opinion added a doctrinal layer that the oral ruling only implied. Rakoff situated Heppner within the broader architecture of privilege law by adopting Professor Robbins’s framework from the Harvard Journal of Law and Technology: all recognized privileges, whether attorney-client, psychotherapist-patient, spousal, or clergy, share a common foundation in human relationships characterized by trust, fiduciary obligation, and professional accountability. As Robbins argued, “the law protects relationships, not functionalities. A calculator can compute like an accountant, but nobody suggests a calculator privilege.” The same logic applies to a generative model that drafts a brief or analyzes legal exposure. Simulation of a professional function does not create a privileged tie.
This framework matters beyond Heppner because it forecloses the most sophisticated argument for AI privilege: that the functional output of AI tools resembles the output of privileged professionals closely enough to justify extending protection. Rakoff’s adoption of the “trusting human relationship” requirement establishes that privilege analysis begins with the nature of the relationship, not the nature of the output. No amount of technological sophistication in AI systems can satisfy a test that requires human fiduciary obligation, professional licensure, and the possibility of discipline.
The defense’s strongest argument, that these documents functioned as client notes prepared for attorney consultation, runs into the third-party disclosure problem. A client’s handwritten notes summarizing thoughts for their lawyer retain privilege because no third party ever accesses them. Heppner’s “notes” passed through Anthropic’s servers, subject to Anthropic’s terms, processed by Anthropic’s systems. That intermediary step, voluntary and informed, breaks the confidentiality seal.
The Structural Problem: Why Consumer AI Tools Cannot Preserve Privilege
Heppner exposed a structural mismatch between privilege doctrine and consumer AI architecture. Privilege assumes a private bilateral channel: client communicates with lawyer, lawyer advises client, and both maintain confidentiality. The doctrine tolerates limited extensions of this channel through the Kovel framework (United States v. Kovel, 296 F.2d 918 (2d Cir. 1961)), which allows third parties like accountants, investigators, and translators to participate when their involvement proves “reasonably necessary” for the attorney to provide competent advice.
Consumer AI tools fail the Kovel test on multiple dimensions. The AI vendor operates independently of the attorney. The vendor’s terms of service prioritize the vendor’s interests, not the client’s confidentiality. The vendor retains rights to use, store, and potentially disclose user inputs. And the vendor has no contractual obligation to maintain the confidentiality standards that privilege requires.
This analysis aligns with the concerns raised throughout this blog series. In the Email Privacy series (Parts 1 through 3), I documented how free email providers scan, store, and index privileged communications, creating the same structural vulnerability: a third party with no duty to protect privilege sitting between attorney and client. In the FBI texting and phone call security analyses, I examined how compromised telecommunications infrastructure threatens privilege when attorneys use channels they do not control. Heppner represents the AI-specific manifestation of the same underlying pattern: attorneys and clients routing privileged information through commercial platforms that expressly disclaim confidentiality obligations.
As Debevoise & Plimpton’s analysis of Heppner noted, the critical distinction runs between consumer and enterprise AI tools. Enterprise deployments that contractually prohibit training on client inputs and maintain confidentiality protections occupy different ground than consumer products. But even enterprise arrangements involve a third-party vendor, leaving residual privilege risk. Only one architecture eliminates the third-party problem entirely.
The Solution: Law Firm-Owned AI as Privileged Infrastructure
The Kovel doctrine provides the framework. For over six decades, courts have recognized that attorney-client privilege extends to communications involving third parties when their participation proves reasonably necessary for the attorney to render competent legal advice. The doctrine covers paralegals, law clerks, stenographers, investigators, accountants, and even public relations consultants when they operate under attorney supervision for the purpose of facilitating legal advice.
A law firm-owned LLM or SLM operates as the functional equivalent of these privileged agents. It performs legal research, assists with document drafting, analyzes contracts, reviews discovery materials, and helps attorneys synthesize complex information to advise clients. It performs the same tasks that junior associates, paralegals, and research librarians have performed for decades, all of whom enjoy privilege protection when working under attorney supervision within the firm.
The key distinction: every flaw that killed privilege in Heppner disappears when the AI operates as firm infrastructure rather than a third-party commercial service.
The written opinion’s “trusting human relationship” framework reinforces this conclusion from a different angle. When an attorney uses a firm-owned LLM, the trusting human relationship (attorney-client) exists before, during, and after the AI interaction. The AI operates as an instrument within that relationship, no different from a legal research database or a document management system. The relationship that privilege protects remains intact because the AI never displaces it. Heppner failed precisely because the AI interaction occurred outside any trusting human relationship. Firm-owned systems ensure it never leaves one.
Third-party platform becomes firm infrastructure. Heppner used Anthropic’s consumer product, a commercial platform operated by an independent company with its own interests and disclosure obligations. A firm-owned LLM runs on servers the firm controls, behind the firm’s firewall, within the firm’s confidentiality perimeter. No external entity touches the data. The AI operates like the firm’s email servers, document management systems, and legal research databases: infrastructure under attorney control, subject to the firm’s professional obligations.
No expectation of confidentiality becomes full professional confidentiality. Anthropic’s terms told Heppner his inputs would not remain confidential. A firm-owned system operates under the firm’s existing confidentiality obligations. ABA Model Rule 1.6 binds the firm and every attorney in it. The system exists within the same confidentiality envelope that protects every other piece of firm infrastructure.
Independent client action becomes attorney-directed use. Heppner acted “on his own initiative,” not at counsel’s direction. When attorneys use a firm-owned LLM to research legal issues, draft memoranda, or analyze documents for client matters, they act in their professional capacity under the attorney-client relationship. When clients use the system at their attorney’s direction, the direction element that Heppner lacked becomes present. The government in Heppner conceded that work product protection could apply “if counsel directed the defendant to run the AI searches.”
Retroactive privilege claim becomes privilege from inception. Heppner created documents through a non-confidential channel, then tried to cloak them with privilege by handing them to his lawyers. A firm-owned LLM operates within the privileged relationship from the first keystroke. Communications flow from client to attorney, attorney uses LLM as a research and drafting tool, attorney advises client. Privilege exists throughout because the entire workflow occurs within the confidential attorney-client relationship.
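The architectural shift described above, keeping every AI interaction inside the firm's perimeter, can be made concrete in code. The sketch below is a minimal, hypothetical illustration: the endpoint host, model name, and domain suffix are all placeholder assumptions, not references to any real firm system. The point it demonstrates is structural, a client that refuses by construction to route privileged text anywhere except firm-controlled infrastructure.

```python
# Sketch: routing AI-assisted work through a firm-controlled endpoint,
# so privileged text never leaves the firm's confidentiality perimeter.
# The host name, domain suffix, and model name are hypothetical placeholders.

import json
from urllib.parse import urlparse

FIRM_LLM_ENDPOINT = "https://llm.internal.example-firm.com/v1/chat/completions"

def build_firm_llm_request(prompt: str, matter_id: str) -> dict:
    """Build a chat-style request destined only for firm infrastructure."""
    host = urlparse(FIRM_LLM_ENDPOINT).hostname
    # Refuse to construct any request that would leave the firm perimeter.
    if not host.endswith(".internal.example-firm.com"):
        raise ValueError(f"Endpoint {host} is outside the firm perimeter")
    return {
        "url": FIRM_LLM_ENDPOINT,
        "body": json.dumps({
            "model": "firm-slm-v1",             # hypothetical firm-hosted model
            "metadata": {"matter": matter_id},  # ties the query to a client matter
            "messages": [{"role": "user", "content": prompt}],
        }),
    }
```

Because the perimeter check lives in the request builder itself, an attorney cannot accidentally point the same workflow at a consumer service; the guard fails closed rather than relying on individual discipline.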
Existing Precedent Supports This Framework
Courts already recognize privilege protection for attorney technology tools. Email systems, even those hosted by third-party cloud providers, maintain privilege because the firm controls access and contractually ensures confidentiality. Document management systems store privileged materials on servers the firm selects and secures. Legal research platforms like Westlaw and LexisNexis process attorney search queries that reveal litigation strategy, yet no court has suggested that using these platforms waives privilege. E-discovery platforms, secure client portals, and case management software all operate within the privileged relationship without destroying it.
The principle: the technological nature of a tool does not defeat privilege when the tool operates within a confidential attorney-client relationship under attorney supervision and control. A firm-owned LLM fits squarely within this principle.
Recent case law reinforces this position. In Concord Music Group, Inc. v. Anthropic PBC (N.D. Cal. 2025) and Tremblay v. OpenAI, Inc. (N.D. Cal. 2024), courts found that the work product doctrine can protect AI-generated content where the prompts and their use satisfy the doctrine’s requirements. The distinction turns not on whether AI participated but on whether the use occurred within the privileged relationship at counsel’s direction.
Upjohn Co. v. United States, 449 U.S. 383 (1981), extends the framework further. The Supreme Court held that privilege protects communications between corporate attorneys and all employees, not just the control group, when those communications serve to gather information for providing legal advice. The principle: privilege protects the information-gathering mechanisms attorneys use to provide competent advice. A firm-owned AI system gathering, analyzing, and synthesizing information at an attorney’s direction performs the same function as the employee interviews Upjohn protected.
The Ethical Imperative
ABA Model Rule 1.1, Comment 8, requires attorneys to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” AI tools now qualify as relevant technology for legal practice. Attorneys who refuse to adopt AI capabilities may find themselves unable to deliver competent representation at competitive price points, particularly as AI-augmented competitors increase efficiency.
But competence cuts both ways. ABA Formal Opinion 512 (July 2024) addressed generative AI directly, requiring attorneys to maintain client confidentiality when using AI tools and to obtain informed consent when confidentiality concerns exist. Rule 1.6 mandates vigilance in protecting client information across all digital interactions. Rule 1.4 may require attorneys to consult with clients about AI use in their matters.
These obligations create a pincer: attorneys must adopt AI to remain competent, yet they must protect privilege while doing so. Consumer AI tools, as Heppner demonstrates, cannot satisfy both requirements simultaneously. Firm-owned systems can. They allow attorneys to use AI for research, drafting, and analysis while keeping every interaction within the firm’s confidentiality obligations.
This connects directly to the billing ethics discussed in the Leverage Trap analysis. ABA Formal Opinion 512 established that attorneys who bill hourly “must bill for actual time spent working” when using AI, and must “account for efficiencies when charging clients flat fees.” Firms that deploy their own AI systems can capture efficiency gains, transition to alternative fee arrangements, and protect privilege simultaneously. Firms that rely on consumer tools risk privilege waiver on one side and billing ethics violations on the other.
Practice-Specific Implications
Criminal Defense. Heppner arose in the criminal context, and the stakes could not be higher. Defense strategy, witness assessments, plea negotiation positions, and constitutional arguments represent the most sensitive attorney-client communications in legal practice. Any criminal defense attorney whose client uses a consumer AI tool to “organize thoughts” before meeting with counsel now faces the prospect that those documents become government evidence. Firm-controlled AI systems for client collaboration eliminate this exposure.
Corporate and M&A. Transaction communications often involve material nonpublic information. As documented in the Why Hackers Target Law Firms analysis, the 2016 prosecutions of hackers who breached Cravath, Swaine & Moore and Weil, Gotshal & Manges to steal M&A intelligence demonstrate the premium that deal information commands. AI-assisted due diligence, contract analysis, and deal structuring must occur within privileged systems. Consumer AI tools that retain and potentially train on deal information create both privilege and securities law exposure.
Internal Investigations. The most immediate expansion of Heppner’s logic threatens corporate internal investigations. If employees use consumer AI tools to “organize their thoughts” before interviews with company counsel, those AI-generated summaries, risk analyses, and derivative drafts become discoverable. Opposing parties in litigation, regulatory enforcement actions, and shareholder derivative suits will seek access to any AI-assisted internal investigation materials that passed through third-party platforms.
Intellectual Property. Trade secret protection requires “reasonable efforts” to maintain secrecy. Feeding trade secret information into a consumer AI tool whose terms allow data retention and potential disclosure undermines the very protection the attorney seeks to establish. Firm-owned AI systems allow attorneys to analyze trade secret issues without compromising the secrecy element.
What to Do Tomorrow
Prohibit consumer AI tools for any work involving client information. Heppner’s facts involved a client acting independently, but the logic applies equally to attorneys. If a firm attorney pastes privileged client communications into ChatGPT, Claude’s free tier, or any consumer AI tool whose terms disclaim confidentiality, the same third-party disclosure analysis applies. Issue firm-wide guidance immediately.
Evaluate firm-owned or enterprise AI deployments. Enterprise AI products that contractually prohibit training on inputs and maintain confidentiality protections represent an intermediate step. Firm-owned systems deployed behind the firm’s firewall provide the strongest privilege protection. The investment in proprietary AI infrastructure protects both the firm and its clients.
Update engagement letters. Address AI use in client agreements. Specify what AI tools the firm uses, describe confidentiality protections, and obtain informed consent. Explicitly instruct clients not to use consumer AI tools for anything related to their legal matters without first consulting counsel.
Train attorneys and staff. Model Rule 5.3 extends supervisory responsibility to nonlawyer assistants’ conduct. If a paralegal uses a consumer AI tool for privileged research because nobody told them not to, the supervising attorney bears responsibility. Training must cover which tools are approved, which are prohibited, and why the distinction matters.
Document AI use in privilege logs. As Debevoise’s analysis recommended, privilege logs should clearly denote: (a) that the AI tool was used with an expectation of confidentiality, (b) that the communication reflects legal advice from an attorney or work performed at counsel’s direction, and (c) the confidentiality protections in place. Contemporaneous documentation of these elements provides the evidentiary foundation for privilege claims.
Advise clients directly. Heppner lost privilege because he acted independently. Tell your clients, in writing, at the outset of every representation: do not use consumer AI tools to prepare materials related to your legal matter. If you want AI assistance, we will provide it through our secure systems. The five-minute conversation now prevents the privilege catastrophe later.
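The firm-wide prohibition recommended above can be enforced mechanically as well as by policy. The sketch below shows one way a firm's egress proxy might classify outbound AI traffic; the domain lists are illustrative assumptions, not a complete inventory of consumer AI services, and the internal endpoint is a hypothetical placeholder.

```python
# Sketch of a simple egress policy check a firm might apply at a proxy:
# block known consumer AI services, allow only approved firm endpoints,
# and send anything unrecognized to manual review. Domain lists are
# illustrative, not exhaustive; the approved host is a placeholder.

BLOCKED_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
}
APPROVED_AI_DOMAINS = {"llm.internal.example-firm.com"}  # hypothetical firm endpoint

def ai_egress_decision(hostname: str) -> str:
    """Return 'allow', 'block', or 'review' for an outbound AI request."""
    if hostname in APPROVED_AI_DOMAINS:
        return "allow"
    if hostname in BLOCKED_AI_DOMAINS:
        return "block"
    return "review"  # unknown endpoints get human review, not silent passage
```

Defaulting unknown hosts to review rather than allow mirrors the fail-closed posture privilege protection requires: new consumer AI services appear faster than any blocklist can track them.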
The Front Door Problem
Throughout this blog series, a pattern repeats: attorneys and clients route privileged information through systems they do not control, operated by entities with no duty to protect attorney-client confidentiality. Free email providers scan privileged communications. Compromised telecom networks expose privileged phone calls. Weak passwords hand privileged files to ransomware operators. And now, consumer AI tools process privileged information under terms of service that explicitly disclaim confidentiality.
Each Technology Blind Spot blog has identified the same structural flaw: a third party sitting between attorney and client with the means, right, and sometimes the obligation to access privileged information.
Heppner brings the pattern into sharpest focus because the disclosure is the most intentional. Heppner did not accidentally expose his communications through a weak password or an insecure email provider. He deliberately typed his defense strategy into a commercial platform that told him, in writing, that it would not maintain confidentiality. Judge Rakoff’s written opinion closed with a sentence that should hang in every law firm’s server room: “AI’s novelty does not mean that its use is not subject to longstanding legal principles.” The technology changed. The doctrine did not. And the doctrine requires something no AI platform can provide: a trusting human relationship with a licensed professional who owes fiduciary duties and faces discipline for violating them.
The solution also follows from the pattern. In Part 3 of the Email Privacy series, I argued that secure client portals, borrowed from healthcare’s HIPAA compliance model, eliminate the third-party email problem by keeping communications under attorney control. Firm-owned AI systems apply the same principle to a different technology: keep privileged information within infrastructure the firm controls, subject to the firm’s professional obligations, accessible only to privileged parties.
Heppner’s 31 documents now sit in the government’s evidence file. His defense strategy, his legal theories, his private analysis of his own criminal exposure, all of it available to the prosecutors building the case against him. He typed every word of it into a commercial chatbot that told him it would not keep his secrets.
The privilege framework has not changed. The technology has. The profession’s obligation is to match the technology to the framework, not to hope the framework bends to accommodate convenient tools.
Your clients’ privilege is only as strong as the system carrying their secrets. Choose the system accordingly.
This blog provides general information for educational purposes only and does not constitute legal advice. Consult qualified counsel for advice on specific situations.
About the Author
JD Morris is Co-Founder and COO of LexAxiom. With over 20 years of enterprise technology experience and credentials including an MLS from Texas A&M, MEng from George Washington University, and dual MBAs from Columbia Business School and Berkeley Haas, JD focuses on the intersection of legal technology, cybersecurity, and professional responsibility.
Connect: LinkedIn | X | Bluesky
Appendix: AI Platform Confidentiality Disclaimers
Judge Rakoff’s ruling in Heppner turned on a specific factual finding: Anthropic’s terms of service expressly disclaimed confidentiality. That disclaimer is not unique to Anthropic. Every major consumer AI platform includes substantially similar language. The following excerpts from each platform’s current terms of service demonstrate that no consumer AI tool provides the confidentiality protections that attorney-client privilege requires.
A. OpenAI (ChatGPT): Consumer Terms of Use
Warranty Disclaimer:
“OUR SERVICES ARE PROVIDED ‘AS IS.’ EXCEPT TO THE EXTENT PROHIBITED BY LAW, WE AND OUR AFFILIATES AND LICENSORS MAKE NO WARRANTIES (EXPRESS, IMPLIED, STATUTORY OR OTHERWISE) WITH RESPECT TO THE SERVICES, AND DISCLAIM ALL WARRANTIES INCLUDING, BUT NOT LIMITED TO, WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, SATISFACTORY QUALITY, NON-INFRINGEMENT, AND QUIET ENJOYMENT… WE DO NOT WARRANT THAT THE SERVICES WILL BE UNINTERRUPTED, ACCURATE OR ERROR FREE, OR THAT ANY CONTENT WILL BE SECURE OR NOT LOST OR ALTERED.”
Data Usage:
“We may use Content to provide, maintain, develop, and improve our Services, comply with applicable law, enforce our terms and policies, and keep our Services safe.”
OpenAI permits users to opt out of data training, but notes that “in some cases this may limit the ability of our Services to better address your specific use case.” The opt-out applies only to future training; data submitted before opting out remains subject to the original terms. The “as is” disclaimer and the express statement that content may not “be secure or not lost or altered” negate any reasonable expectation of confidentiality under privilege doctrine.
Source: OpenAI Terms of Use, https://openai.com/policies/row-terms-of-use/ (accessed February 13, 2026)
B. Anthropic (Claude): Consumer Terms of Service
Warranty Disclaimer:
“YOUR USE OF THE SERVICES, MATERIALS, AND ACTIONS IS SOLELY AT YOUR OWN RISK. THE SERVICES, OUTPUTS, AND ACTIONS ARE PROVIDED ON AN ‘AS IS’ AND ‘AS AVAILABLE’ BASIS AND, TO THE FULLEST EXTENT PERMISSIBLE UNDER APPLICABLE LAW, ARE PROVIDED WITHOUT WARRANTIES OF ANY KIND, WHETHER EXPRESS, IMPLIED, OR STATUTORY. WE AND OUR PROVIDERS EXPRESSLY DISCLAIM ANY AND ALL WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE, TITLE, MERCHANTABILITY, ACCURACY, AVAILABILITY, RELIABILITY, SECURITY, PRIVACY, COMPATIBILITY, NON-INFRINGEMENT…”
Data Training:
Anthropic’s consumer terms (governing Claude.ai, Claude Pro, and Claude Max) permit the company to use conversation data for model training unless the user affirmatively opts out. Only Commercial and Enterprise accounts, governed by separate terms with a Data Processing Addendum, contractually prohibit training on customer content. The consumer warranty disclaimer expressly disclaims warranties of “security” and “privacy,” language that directly undermines any claim that user inputs remain confidential. This is the specific provision Judge Rakoff cited in Heppner when he found that Anthropic had “an express provision that what was submitted was not confidential.”
Source: Anthropic Consumer Terms of Service, https://www.anthropic.com/legal/consumer-terms (accessed February 13, 2026)
C. Perplexity AI: Terms of Service and Privacy Policy
Warranty Disclaimer:
“Your access to and use of the Services are at your own risk. You understand and agree that the Services… are provided to you on an ‘AS IS’ and ‘AS AVAILABLE’ basis. Without limiting the foregoing, to the maximum extent permitted under applicable law, the Company Entities DISCLAIM ALL WARRANTIES AND CONDITIONS, WHETHER EXPRESS OR IMPLIED, OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT.”
Security Disclaimer:
“Despite our reasonable efforts to protect your information, no security measures are impenetrable, and we cannot guarantee ‘perfect security.’ Any information you send to us electronically, while using the Services or otherwise interacting with us, may not be secure while in transit. We recommend that you do not use unsecure channels to send us sensitive or confidential information.”
Perplexity’s privacy policy contains a rare moment of candor: it explicitly recommends against sending “sensitive or confidential information.” The company’s Enterprise DPA states that “Personal Data will not be used for training of Perplexity’s large language models,” but this protection applies only to enterprise customers. Consumer and Pro users receive no such commitment. The terms disclaim all warranties regarding “the completeness, accuracy, availability, timeliness, security or reliability of the Services” and disclaim responsibility for “the deletion of, or the failure to store or transmit, Your Content.”
Sources: Perplexity Terms of Service, https://www.perplexity.ai/hub/legal/terms-of-service; Perplexity Privacy Policy, https://www.perplexity.ai/hub/legal/privacy-policy; Perplexity Enterprise DPA, https://www.perplexity.ai/hub/legal/dpa (accessed February 13, 2026)
D. Google (Gemini): Gemini Apps Privacy Notice and API Terms
Consumer (Gemini Apps): Human Review Warning:
“Please don’t enter confidential information that you wouldn’t want a reviewer to see or Google to use to improve our services, including machine-learning technologies.”
Data Usage and Human Review:
“Google uses your activity to provide, develop, and improve its services (including training generative AI models), as well as to protect Google, its users, and the public with the help of human reviewers.”
Unpaid API Services:
“When you use Unpaid Services, including, for example, Google AI Studio and the unpaid quota on Gemini API, Google uses the content you submit to the Services and any generated responses to provide, improve, and develop Google products and services and machine learning technologies… human reviewers may read, annotate, and process your API input and output… Do not submit sensitive, confidential, or personal information to the Unpaid Services.”
Google stands alone among the four platforms in issuing a direct, unambiguous instruction: “don’t enter confidential information.” The company discloses that human reviewers read consumer conversations and that this data trains generative AI models. Google Workspace customers with enterprise Gemini licenses receive different terms: content “is not human reviewed or used for Generative AI model training outside their domain without permission.” The gap between consumer and enterprise protections mirrors the pattern across all four platforms. For attorneys and clients using consumer Gemini, Google’s own terms constitute an express instruction not to submit privileged material.
Sources: Gemini Apps Privacy Notice, https://support.google.com/gemini/answer/13594961; Gemini for Workspace FAQ, https://support.google.com/a/answer/14130944; Generative AI APIs Terms, https://ai.google.dev/gemini-api/terms (accessed February 13, 2026)
The Common Thread
Four platforms, four variations on the same language: “as is,” no warranty of security, no guarantee of confidentiality, express rights to use submitted content for model training, and in Google’s case, a direct instruction not to submit confidential information. Every platform offers enterprise or commercial tiers with stronger protections, but the consumer products that attorneys and clients reach for first provide none of the confidentiality safeguards that privilege requires. Judge Rakoff did not create new law in Heppner. He applied existing doctrine to terms that every user accepted and almost none of them read.
References
United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026) (written opinion denying privilege for AI-generated documents; oral ruling from bench Feb. 10, 2026)
United States v. Kovel, 296 F.2d 918 (2d Cir. 1961) (extending privilege to third parties reasonably necessary for attorney to render legal advice)
Upjohn Co. v. United States, 449 U.S. 383 (1981) (privilege protecting corporate attorney communications with employees for legal advice)
Concord Music Grp., Inc. v. Anthropic PBC, No. 24-cv-03811-EKL, 2025 WL 1482734 (N.D. Cal. May 23, 2025) (work product doctrine and AI-generated content)
Tremblay v. OpenAI, Inc., No. 23-cv-03223-AMO, 2024 WL 3748003 (N.D. Cal. Aug. 8, 2024) (work product doctrine and AI-generated content)
Shih v. Petal Card, Inc., No. 21-cv-07041 (S.D.N.Y. 2021) (work product protection for litigation-related communications without direct attorney involvement; distinguished and disagreed with in Heppner)
Robbins, Ira P., “Against an AI Privilege,” Harvard Journal of Law & Technology Digest (Nov. 7, 2025), https://jolt.law.harvard.edu/digest/against-an-ai-privilege
ABA Model Rules of Professional Conduct, Rules 1.1 (Comment 8), 1.4, 1.6(c) (Comment 18), 5.3
ABA Formal Opinion 477R (May 2017): Securing Communication of Protected Client Information
ABA Formal Opinion 483 (October 2018): Lawyers’ Obligations After an Electronic Data Breach or Cyberattack
ABA Formal Opinion 512 (July 2024): Generative Artificial Intelligence Tools
Florida Bar Ethics Opinion 24-1: Use of Generative Artificial Intelligence in the Practice of Law
Debevoise & Plimpton, “SDNY Rules AI-Generated Documents Are Not Protected by Privilege” (February 2026)
EDRM, “A.I. Documents Deemed Not Privileged” (February 12, 2026)
eDiscovery Today, “AI Created Documents Sent to Counsel Not Privileged, Court Rules” (February 12, 2026)
Leech Tishman, “Court Declines Privilege Protection for Client-Generated AI Documents” (February 2026)
Calvin Klein Trademark Trust v. Wachner, 198 F.R.D. 53 (S.D.N.Y. 2000) (Kovel applied to PR consultants)
Scott v. Beth Israel Medical Center, 17 Misc.3d 934 (N.Y. Sup. Ct. 2007) (no privilege in employer email systems)