
AI and the loss of privilege: US v Heppner

Amit Sharma
April 2, 2026
10 min

On February 10, 2026, Judge Jed Rakoff of the Southern District of New York issued a ruling in *United States v. Heppner* that should fundamentally change how every lawyer thinks about AI and confidentiality.

The defendant had used a consumer AI tool to generate documents related to a pending criminal investigation. The government moved to compel production, arguing the documents weren't privileged. Judge Rakoff agreed, ruling that neither attorney-client privilege nor the work product doctrine protected the AI-generated materials.

The reasoning was surgical.

The defendant had "disclosed it to a third-party, in effect, AI, which had an express provision that what was submitted was not confidential."

The AI tool's own privacy policy - which permitted the use of inputs for model training and allowed disclosure to third parties - was the basis for finding that the confidentiality requirement for privilege had been destroyed.

This is a case of first impression on what Judge Rakoff called a "nationwide" question: whether communications with a publicly available AI platform during a pending legal matter are protected by privilege.

The answer, as of February 2026: no.

"Just Opt Out of Training. Problem Solved."

Within days of the Heppner ruling, the most common response was: this is a consumer AI problem, not an enterprise one. Anthropic's enterprise and API tiers don't train on customer data - it's the default, not even an opt-out. OpenAI Enterprise is the same. Even on consumer plans, you can disable training with a checkbox.

And the second pushback followed close behind: if disclosing information to an AI tool waives privilege, then every lawyer who's ever used Gmail, Google Drive, or iCloud to communicate with a client has the same problem. The Harvard Law Review noted that Judge Rakoff "assumed sub silentio that Claude was more like a non-attorney human than a tool." Jennifer Ellis put it more bluntly: "If I type my thoughts into a software platform run by a third party, that is not privileged. Nothing about that analysis changes because the platform is labeled 'AI.'"

These are serious arguments. And on the training point specifically, they're right: the "your data trains the model" fear is largely a solved problem for anyone using enterprise AI tiers or checking the opt-out box.

But I think the training argument was always a red herring.

The real privilege question was never "will they train on my data?" It's more fundamental than that, and it applies even when training is completely off the table.

What "No Training" Doesn't Fix

When you opt out of training, or use an enterprise tier that never trains on your data, here's what still happens:

  1. Your text still reaches the provider's servers in readable form. The AI needs to read your contract clause to analyze it. Unlike encrypted cloud storage where files can sit in an unprocessed state, the LLM must ingest the full text of your input to generate a response. The content is decrypted, tokenized, and processed in the provider's infrastructure.
  2. Safety monitoring still applies. Every major AI provider, including enterprise tiers, runs safety classifiers on inputs and outputs. Anthropic retains flagged content and safety classifier results even under zero-data-retention agreements. These aren't optional add-ons; they're built into the architecture for compliance with their acceptable use policies. The provider's employees and systems can access flagged content for review.
  3. Retention carve-outs exist in every agreement. Even the most restrictive enterprise agreements include exceptions for legal compliance, abuse detection, and safety review. Anthropic's zero-data-retention offering still retains "User Safety classifier results in order to enforce their Usage Policy." Flagged conversations can be retained for up to two years. These carve-outs mean "we don't train on your data" is not the same as "we never see your data."

Similar carve-outs exist in cloud storage agreements too. Google can access your Google Docs for abuse detection. Microsoft's enterprise agreements include similar exceptions. And courts have been fine with cloud storage and privilege for 15 years, provided reasonable enterprise agreements are in place.

Why Should AI Be Different? It May Not Be, but We Don't Know Yet

The "Google Docs has the same problem" argument has real force. And if I'm being intellectually honest, there's a plausible future where courts treat enterprise AI tools exactly like cloud storage, as passive technology intermediaries whose role doesn't affect the privilege analysis, provided proper contractual safeguards exist.

But that future doesn't exist yet. And the present is dangerously uncertain.

Here's the critical difference: cloud storage has decades of case law establishing that privilege survives third-party hosting with reasonable enterprise agreements. Courts have addressed the question, developed frameworks, and created predictable outcomes. A lawyer using Google Workspace with a proper enterprise agreement can point to years of precedent supporting privilege preservation.

AI has two rulings from the same week in February 2026 that reached opposite conclusions.

Warner v. Gilbarco: The Same Week, the Opposite Result

On February 17, 2026, the same week Judge Rakoff issued his written Heppner opinion, Magistrate Judge Anthony Patti in *Warner v. Gilbarco* (E.D. Mich.) reached the opposite conclusion. A pro se litigant had used ChatGPT to help draft legal arguments. Judge Patti held that the AI-assisted work product was *protected*, treating the AI as a tool (like a word processor) rather than a third party capable of receiving confidential information.

The National Law Review published an analysis titled "Same Week, Different Frameworks" and concluded that the two courts adopted "incompatible frameworks for how AI tools relate to privilege and work product doctrine." Both courts may be correct on their specific facts:

- **Heppner** involved a criminal defendant using a consumer AI tool *without counsel's direction*, where the tool's privacy policy permitted training and third-party disclosure. No attorney-client relationship was mediated through the AI.

- **Warner** involved a pro se litigant who functioned as her own counsel, using AI as a drafting assistant. The court didn't engage with ChatGPT's terms of service at all.

The key variables that determined the different outcomes: (1) whether an attorney directed the AI use, (2) whether the court characterized AI as a "tool" or a "third party," and (3) whether the court examined the AI provider's privacy policy.

An enterprise AI agreement with no-training clauses and a Data Protection Addendum addresses several of these factors. The National Law Review article specifically called enterprise-grade terms "the two strongest available defenses." But it also noted: "An enterprise license without documentation infrastructure is effectively compliance theater, not privilege protection."

The Risk Calculus

Let's frame this as a practical question rather than a doctrinal one.

If you're a law firm using an enterprise AI tool with no-training agreements, strong contractual confidentiality protections, and attorney-directed workflows, you probably have a defensible position on privilege. Reasonable lawyers can disagree, but the enterprise safeguards meaningfully reduce risk.

But "defensible" is not the same as "settled." Consider the position you're actually in:

  • You're asking to be the test case. No court has ruled on whether enterprise AI agreements with no-training provisions adequately preserve privilege. Heppner involved consumer use. Warner didn't examine the provider's terms at all. The enterprise scenario is genuinely untested. Are you comfortable having your client's privilege depend on how a court in your jurisdiction interprets your vendor's Data Protection Addendum for the first time?
  • Opposing counsel knows the arguments. An aggressive litigant will point to the safety monitoring carve-outs, the retention exceptions, the possibility of human review, and argue that these constitute third-party access that destroys confidentiality. Whether they'd win is uncertain. Whether they'd force you to litigate the question in the middle of your case is not.
  • The burden of "reasonable efforts" is forward-looking. ABA Opinion 512 requires "reasonable efforts to prevent inadvertent or unauthorized disclosure." As the case law develops and the risks become better understood, what counts as "reasonable" will evolve. The same enterprise agreement that satisfies the standard today may not satisfy it next year, as courts and bar associations refine their expectations.

The IP Exposure Nobody's Talking About

The privilege debate focuses almost entirely on client data - party names, deal terms, dollar amounts. But there's a second category of confidential information flowing into AI tools that receives almost no attention: your firm's own intellectual property.

When a firm configures an AI tool for contract review, it doesn't just feed in the contract. It feeds in the *framework for evaluating that contract*: playbooks, clause libraries, negotiation positions, risk thresholds, fallback language, escalation criteria. These are the artifacts of institutional knowledge - what a senior partner knows about how to negotiate an indemnification cap, what position to take on limitation of liability, when to push back on a governing law clause and when to concede.

This institutional knowledge is, for most firms, their primary competitive asset. It's what clients pay for. It's what takes decades to develop. And it's being uploaded wholesale into AI systems as system prompts, custom instructions, and configuration files.

Consider what's actually being transmitted:

  • Playbooks that encode the firm's risk appetite: "For deals under $10M, accept uncapped indemnification for IP infringement and fraud; for deals over $10M, cap at 2x contract value with carve-outs."
  • Clause libraries containing the firm's preferred language - the exact formulations that senior partners have refined over years of negotiation.
  • Fallback positions that reveal what the firm will ultimately accept when pushed: "If counterparty rejects mutual termination for convenience, fallback to termination for convenience with 90-day notice plus wind-down period."
  • Risk scoring frameworks that expose the firm's priorities: which issues they treat as red lines, which they consider negotiable, where they draw the line between acceptable and unacceptable risk.

This isn't client data. It's the firm's trade secret. And it's subject to the same retention carve-outs, safety monitoring, and third-party access provisions as any other input to the AI system.

Pseudonymization doesn't solve this.

You can replace "Acme Corp" with "PARTY_A" in a contract clause, but you can't pseudonymize a playbook that says "our standard position on consequential damages exclusions is..." because the position itself *is* the confidential information. The content is the IP, and the AI needs to read it in full to apply it.

There's also a technical vulnerability that compounds the risk: system prompt leakage. Security researchers have extensively documented techniques for extracting system prompts from LLMs through adversarial inputs. A counterparty, or anyone with access to the AI tool, could potentially craft prompts that cause the system to reveal the firm's playbook instructions, negotiation parameters, or fallback positions. The system prompt is treated as confidential by the AI provider, but it is not architecturally protected from extraction.

This creates an uncomfortable question: if your firm's playbooks, clause libraries, and negotiation strategies are sitting in an AI provider's infrastructure - subject to safety monitoring, retention carve-outs, and potential prompt extraction - are you protecting client data while inadvertently exposing the institutional knowledge that is your firm's competitive moat?

The firms that think most carefully about AI confidentiality are asking not just "is my client's data protected?" but "is my firm's knowledge protected?" These are different questions with different answers, and the second one is getting almost no attention.

This Isn't Theoretical. Look at the Last Week of March 2026.

Everything we've argued so far could be dismissed as hypothetical risk. "Sure, these attacks are *possible*, but our vendor has strong security practices. The chance of an actual breach is low."

Then look at what happened in the last week of March 2026 - not to fringe startups, but to the AI infrastructure that enterprise legal teams rely on.

  • March 31: Anthropic's own source code leaked. A packaging error in npm version 2.1.88 of Claude Code shipped a 59.8 MB source map file that exposed approximately 512,000 lines of internal TypeScript, including system prompt architectures, feature flags, telemetry pipelines, and internal tooling configurations. Security researcher Chaofan Shou posted the direct link to Anthropic's own Cloudflare R2 storage bucket. Mirrored repositories accumulated tens of thousands of GitHub stars before DMCA takedowns hit. Anthropic confirmed no customer data or model weights were exposed. But the incident demonstrated something more fundamental: **system prompts and internal instructions - the same category of data as your playbooks and custom configurations - were fully exposed through a human packaging error.** Not a sophisticated attack. Not a zero-day exploit. A build script mistake.
  • March 25-31: LiteLLM supply chain attack. Hackers compromised the PyPI publishing credentials for LiteLLM - a widely used open-source library that proxies requests to multiple LLM providers (OpenAI, Anthropic, Azure, and others). Malicious code injected into versions 1.82.7 and 1.82.8 harvested SSH keys, .env files, cloud credentials, and AI API keys from every system that installed the compromised package. Mercor, a $10 billion AI startup that contracts with OpenAI and Anthropic, lost 4 terabytes of data - including 939GB of source code, a 211GB user database, and 3TB of storage buckets containing video interviews and identity verification passports. Thousands of other companies were affected. The attack vector wasn't the AI provider - it was a library in the supply chain *between* the customer and the provider.
  • Mid-March: 3.7 million AI chat logs exposed. Security researcher Jeremiah Fowler discovered three publicly exposed databases from Sears containing 3.7 million customer service chatbot logs and 1.4 million audio files with transcripts - including names, phone numbers, and home addresses. The databases were simply left unsecured.

Three incidents. Three different attack surfaces. Three reminders that the security of AI infrastructure depends not just on the provider's policies, but on an entire chain of dependencies - npm packages, PyPI libraries, cloud storage configurations, build pipelines, and human operators - any one of which can fail.

The Claude Code leak is particularly relevant to the playbook argument. If Anthropic's own internal system prompts can leak through a packaging error, what about the system prompts that encode your firm's negotiation playbook? The same category of data, instructions that tell the AI how to behave - was exposed not through a policy failure, but through operational error. No amount of contractual protection prevents a misconfigured build script.

And the LiteLLM compromise goes further. Even if you trust your AI provider completely, your data flows through middleware, proxy libraries, and infrastructure dependencies that your vendor doesn't control. A compromised library in the request pipeline can intercept everything - your prompts, your playbook instructions, your contract text - before it ever reaches the AI provider's secured infrastructure. Your enterprise agreement with Anthropic or OpenAI doesn't cover what happens in the libraries between you and them.

These aren't edge cases. They happened in a single week, to companies with sophisticated security teams and enterprise-grade infrastructure. The question isn't whether your AI provider's policies are good. The question is whether the entire operational chain, from your keyboard to the model and back, is immune to human error, supply chain attacks, and infrastructure misconfiguration.

The answer, as of this week, is clearly no.

The Position That Makes the Question Disappear

There's an alternative that doesn't require you to navigate any of this uncertainty.

If the privileged content never reaches the AI provider in the first place - if party names, deal terms, dollar amounts, and identifying details are replaced with consistent pseudonyms before anything leaves your environment - then the privilege analysis never arises. There's no third-party disclosure to evaluate. No privacy policy to parse. No enterprise agreement to defend. No framework conflict between Heppner and Warner to navigate.

Instead of asking "does our enterprise AI agreement adequately protect privilege?" GCs should be asking "why are we sending privileged text to a third party at all, when we don't have to?"

This isn't about fear. Enterprise AI agreements are a meaningful step forward from consumer use, and firms using them are acting responsibly. But there's a difference between a defensible position and an unassailable one. In a profession built on reducing risk for clients, the stronger position is the one where the question simply doesn't come up.

The Wrong Conversation

Walk into any legal technology conference in 2026 and the AI privacy conversation follows a predictable script.

Vendor: "We're SOC2 Type II certified, ISO 27001 compliant, and all data is encrypted at rest and in transit with AES-256."

Buyer: "Great. We're comfortable."

This exchange misses the point entirely. SOC2 and ISO 27001 certify that a vendor's infrastructure follows security best practices - access controls, audit logging, incident response. They protect the pipe. But they say nothing about what travels through it.

The Heppner ruling isn't about infrastructure security. It's about a more fundamental question: did privileged information leave the client's control and reach a third party?

If the answer is yes, and for the vast majority of legal AI tools on the market today, it is, then the security certification of the receiving party is legally irrelevant to the privilege analysis. You've made a third-party disclosure. The privilege question turns on whether that disclosure was protected by a reasonable expectation of confidentiality, and whether the third party's terms support that expectation.

Most AI providers' terms don't.

What ABA Opinion 512 Actually Says

The American Bar Association anticipated this issue. In July 2024, the ABA Committee on Ethics and Professional Responsibility issued Formal Opinion 512, titled "Generative AI Tools."

The opinion applies existing Model Rules - particularly Rule 1.6 (Confidentiality of Information) - to lawyers' use of generative AI. The core requirement: lawyers must make "reasonable efforts to prevent the inadvertent or unauthorized disclosure" of client information when using AI tools.

The opinion spells out what "reasonable efforts" means in this context:

  1. Understand whether client data is used to train models. If your inputs become part of the model's training data, they're no longer confidential in any meaningful sense.
  2. Understand whether prompts and inputs are stored or accessible to third parties. Storage by the AI provider constitutes a third-party disclosure, regardless of how that storage is secured.
  3. Evaluate the sensitivity of the information involved. Higher sensitivity demands stronger protections - not just better encryption, but architectural guarantees that sensitive data doesn't leave your environment.
  4. Read the terms of service. The privacy policy isn't boilerplate. As Judge Rakoff demonstrated, it's the document a court will use to evaluate whether your expectation of confidentiality was reasonable.

Most commentary on Opinion 512 focused on the first point - training data. But the second and third points are far more consequential. They draw a distinction between securing data that a third party holds (encryption, access controls) and ensuring the third party never holds it at all.

The Heppner ruling enforced exactly that distinction.

The Numbers Behind the Anxiety

The legal profession knows this is a problem. The data is unambiguous:

  • 56% of corporate counsel believe AI tools could compromise attorney-client privilege, with 49% citing generative AI specifically as the highest risk (Association of Corporate Counsel Survey).
  • 46% of legal professionals rank data privacy compliance as their top AI concern, and 43% are specifically worried about client confidentiality (Wolters Kluwer 2026 Future Ready Lawyer Survey).
  • 60% of in-house legal teams don't even know whether their outside counsel uses AI on their matters (ACC/Everlaw GenAI Survey, cited by Everlaw CLO Gloria Lee). This isn't a technology problem, it's a transparency crisis between firms and clients.
  • 80% of respondents in the Wolters Kluwer survey expect information security challenges to significantly impact their organizations in the next three years, but only 31% feel very prepared to address them.

There's a striking gap between awareness and action. Lawyers know confidentiality is at risk. They don't know what the architecturally sound alternative looks like.

Three Approaches to Confidential Text in Legal AI

When a legal AI tool needs to process contract text that contains privileged or confidential information, there are fundamentally three architectural approaches:

1. Full-Text Transmission (The Default)

Most legal AI tools send the full text of your document - party names, deal terms, dollar amounts, matter details - to an LLM provider's API. The text is processed, and the response is returned.

Security certifications protect this text in transit and at rest. Enterprise agreements may include provisions against using the text for model training. But the text still reaches the AI provider's servers in readable form. A human or automated system at the provider could, in theory, access it.

This is the architecture Judge Rakoff ruled against. The provider's privacy policy permitted third-party disclosure. The privilege was waived.

Even with enterprise agreements that are more restrictive than consumer terms, the fundamental structure is the same: privileged text leaves your environment and reaches a third party. The legal defensibility of this approach now has a court ruling working against it.
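To make the default concrete, here's a minimal sketch of what actually leaves your environment under full-text transmission. The endpoint shape, model name, and field names are illustrative assumptions, not any specific provider's API:

```python
import json

# A sketch of what the default architecture transmits. The model name
# and message fields below are illustrative assumptions, not any
# particular provider's API.

clause = ("Acme Corp shall indemnify Beta LLC for all claims "
          "up to a maximum liability of $5,000,000.")

request_body = json.dumps({
    "model": "some-llm",  # hypothetical model name
    "messages": [{"role": "user",
                  "content": f"Review this indemnification clause: {clause}"}],
})

# Everything in the body reaches the provider's servers in readable
# form - party names, amounts, and the clause itself:
assert "Acme Corp" in request_body
assert "$5,000,000" in request_body
```

No amount of TLS or at-rest encryption changes what this body contains; the confidentiality question is decided before the request is ever sent.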

2. Redaction (The Blunt Instrument)

Some tools attempt to solve this by redacting sensitive information before sending text to the AI. Party names are removed. Dollar amounts are stripped. Identifying details are blanked out.

The problem is that redaction is inherently lossy. Consider this clause:

> "[REDACTED] shall indemnify [REDACTED] for all claims arising from [REDACTED]'s breach of the representations set forth in Section [REDACTED], up to a maximum liability of [REDACTED]."

The AI has almost nothing to work with. It can't evaluate whether the indemnification is mutual or one-sided. It can't assess whether the liability cap is reasonable relative to the deal size. It can't identify which party bears which risk.

Redaction preserves confidentiality at the cost of destroying the semantic context that makes AI analysis valuable. You've protected the text by making it useless.

3. Pseudonymization (The Structural Solution)

The third approach replaces sensitive entities with consistent, meaningful placeholders before the text leaves your environment:

> "PARTY_A shall indemnify PARTY_B for all claims arising from PARTY_B's breach of the representations set forth in Section 4.2, up to a maximum liability of AMOUNT_1."

The AI can now reason about the full clause structure. It understands the indemnification is one-sided (PARTY_A indemnifies PARTY_B). It can evaluate the clause against your playbook's standards. It can suggest a redline that makes the indemnification mutual or adjusts the liability cap.

But the actual party names, the real dollar amounts, the specific section references that identify the deal - none of that ever left your environment. A mapping table (PARTY_A = Acme Corp, AMOUNT_1 = $5,000,000) is maintained locally. When the AI's response comes back with placeholder references, you remap them to the real entities on your side.

The result: the AI gets full semantic context. Your privileged information stays with you. And if a court ever examines what was transmitted to the AI provider, it finds pseudonymized text that reveals nothing about the parties, the deal, or the matter.

This is the architecture that most fully satisfies ABA Opinion 512's "reasonable efforts" standard. There's nothing privileged to disclose - because nothing privileged was ever transmitted.
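The round trip above can be sketched in a few lines. Entity detection is hard-coded here for illustration; a production system would identify entities automatically:

```python
# A sketch of the pseudonymize -> analyze -> remap round trip.
# The entity list is hand-supplied for illustration only; real
# detection is a separate problem.

def pseudonymize(text, entities):
    """Replace each sensitive entity with a consistent placeholder
    and return the masked text plus the local mapping table."""
    mapping = {}
    masked = text
    for value, placeholder in entities:
        mapping[placeholder] = value
        masked = masked.replace(value, placeholder)
    return masked, mapping

def remap(text, mapping):
    """Restore real entities in the AI's response - locally, so the
    mapping table never leaves your environment."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text

clause = ("Acme Corp shall indemnify Beta LLC for all claims arising "
          "from Beta LLC's breach, up to a maximum liability of "
          "$5,000,000.")
entities = [("Acme Corp", "PARTY_A"),
            ("Beta LLC", "PARTY_B"),
            ("$5,000,000", "AMOUNT_1")]

masked, table = pseudonymize(clause, entities)
# Only `masked` is ever transmitted; `table` stays local.

ai_response = "Consider making PARTY_A's indemnity of PARTY_B mutual."
print(remap(ai_response, table))
# -> Consider making Acme Corp's indemnity of Beta LLC mutual.
```

The important property is consistency: every occurrence of "Beta LLC" maps to the same placeholder, so the AI can still reason about which party bears which obligation.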

Beyond Pseudonymization: The Detection Stack

Effective pseudonymization requires more than simple find-and-replace. Contract text contains confidential information in multiple forms:

- Named entities: Party names, individuals, organizations, locations - detectable through Named Entity Recognition (NER) models trained on legal text.

- Structured data: Dollar amounts, dates, email addresses, phone numbers, tax identification numbers - capturable through regular expressions and pattern matching.

- Organization-specific terms: Project codenames, product names, matter IDs, internal references - only identifiable through custom dictionaries maintained by each client.

A robust moderation layer combines all three detection methods. NER catches what regex misses (variations in how parties are named). Regex catches what NER misses (structured identifiers that don't look like entities). Custom dictionaries catch what neither can detect (the internal codename for a deal that appears nowhere in public data).

The result is a multi-layer detection architecture where each layer covers the gaps left by the others.
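A minimal sketch of how the layers compose, with a stub where an NER model would slot in. The patterns, labels, and example codename are illustrative assumptions, not a production ruleset:

```python
import re

# Multi-layer detection sketch: regex for structured data, a custom
# dictionary for org-specific terms, and a stub for the NER layer
# (a real deployment would run a legal-domain NER model there).

MONEY = re.compile(r"\$[\d,]+(?:\.\d{2})?")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
DATE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b")

def regex_layer(text):
    hits = []
    for label, pat in (("AMOUNT", MONEY), ("EMAIL", EMAIL), ("DATE", DATE)):
        hits += [(m.group(), label) for m in pat.finditer(text)]
    return hits

def dictionary_layer(text, custom_terms):
    # custom_terms: client-maintained {term: label}, e.g. deal codenames
    return [(t, label) for t, label in custom_terms.items() if t in text]

def ner_layer(text):
    # Stub: an NER model would catch party names and individuals here -
    # the variations that no regex can describe.
    return []

def detect(text, custom_terms):
    seen, out = set(), []
    for hit in regex_layer(text) + dictionary_layer(text, custom_terms) + ner_layer(text):
        if hit[0] not in seen:
            seen.add(hit[0])
            out.append(hit)
    return out

text = ("Project Falcon closes on 06/30/2026 for $5,000,000; "
        "contact j.doe@acme.com.")
print(detect(text, {"Project Falcon": "CODENAME"}))
```

Note that "Project Falcon" is invisible to both regex and NER trained on public data - only the client's own dictionary can flag it, which is why the dictionary layer can't be skipped.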

How ContractKen Handles This Issue

Read our detailed approach here: https://www.contractken.com/moderation-layer

What the Market Will Demand

The Heppner ruling is a leading indicator, not an isolated event. Several forces are converging:

  • Regulatory pressure is mounting. The EU AI Act's high-risk system rules take effect August 2, 2026, with penalties of up to 35 million euros or 7% of global revenue. Legal AI tools that process privileged information will almost certainly qualify as high-risk systems requiring conformity assessments and human oversight.
  • Clients are waking up. The 60% transparency gap - in-house teams not knowing whether their firms use AI - will close. When it does, "how does your AI handle our privileged information?" will be a standard question in outside counsel guidelines. Firms that can't answer it architecturally will lose work.
  • Insurers are watching. The legal malpractice insurance market is already responding to AI risk. A privilege waiver caused by inadequate AI architecture is exactly the kind of claim that will drive coverage restrictions and premium increases.
  • Courts will follow. Heppner is a criminal case involving a pro se defendant using a consumer tool. The holding is narrow. But the reasoning, that disclosure to an AI tool with permissive terms constitutes a third-party disclosure, applies with equal force to law firms using enterprise tools whose terms permit data access by the provider.

Three Questions for GC on a Monday Morning

If you're a lawyer using AI tools on client matters, or a client whose lawyers might be, here are three questions that matter more than any security certification:

  1. Does my confidential text ever reach the AI provider's servers in readable form? Not "is it encrypted in transit." Not "is it stored securely." Does the actual text - with real party names, real deal terms, real dollar amounts - reach the provider in a form that could be read?
  2. What is the provider's legal basis for claiming that transmission doesn't constitute third-party disclosure? Enterprise agreements help. But "we won't use it for training" is not the same as "we never had it." A court applying the Heppner reasoning will look at whether the information reached the third party, not what the third party promised to do with it.
  3. Can the provider show me, architecturally, where confidentiality is preserved? Not in a sales deck. In a system architecture diagram. Where does the privileged text stop? What crosses the boundary to the AI provider? What stays with me?

If the answer to #1 is yes and the answer to #2 is "trust our enterprise agreement," the Heppner ruling just told you what a court thinks about that reasoning.

The privilege you waive today doesn't come back tomorrow.

---

Sources

  1. United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 10, 2026): [Debevoise analysis](https://www.debevoisedatablog.com/2026/02/11/district-court-rules-ai-generated-documents-are-not-protected-by-privilege/) | [Gibson Dunn analysis](https://www.gibsondunn.com/ai-privilege-waivers-sdny-rules-against-privilege-protection-for-consumer-ai-outputs/) | [Sidley analysis](https://www.sidley.com/en/insights/newsupdates/2026/02/generative-ai-and-privilege-practical-lessons-from-two-early-decisions)
  2. Warner v. Gilbarco (E.D. Mich. Feb. 17, 2026): [National Law Review: "Same Week, Different Frameworks"](https://natlawreview.com/article/same-week-different-frameworks-why-heppner-and-warner-both-got-it-right-ai)
  3. Harvard Law Review: United States v. Heppner (https://harvardlawreview.org/blog/2026/03/united-states-v-heppner/)
  4. Casepoint: "When AI Becomes Discoverable" (https://www.casepoint.com/blog/heppner-ai-attorney-client-privilege-ruling/) - enterprise agreement analysis
  5. Anthropic Privacy Center: Data Retention (https://privacy.claude.com/en/articles/10023548-how-long-do-you-store-my-data) | Training Opt-Out (https://privacy.claude.com/en/articles/10023580-is-my-data-used-for-model-training)
  6. ABA Formal Opinion 512, "Generative AI Tools" (July 29, 2024)
  7. Wolters Kluwer 2026 Future Ready Lawyer Survey (https://www.wolterskluwer.com/en/know/future-ready-lawyer-2026)
  8. ACC/Everlaw GenAI Survey (https://www.lexpert.ca/news/features/survey-reveals-in-house-counsel-concerns-over-ais-risks-to-legal-privilege-and-data-security/388738)
  9. 8am 2026 Legal Industry Report (https://www.8am.com/reports/legal-industry-report-2026/)
  10. Claude Code source leak via npm - The Hacker News (https://thehackernews.com/2026/04/claude-code-tleaked-via-npm-packaging.html) | [VentureBeat](https://venturebeat.com/technology/claude-codes-source-code-appears-to-have-leaked-heres-what-we-know) | [The Register](https://www.theregister.com/2026/03/31/anthropic_claude_code_source_code/)
  11. Mercor/LiteLLM supply chain attack - TechCrunch (https://techcrunch.com/2026/03/31/mercor-says-it-was-hit-by-cyberattack-tied-to-compromise-of-open-source-litellm-project/) | [Neowin](https://www.neowin.net/news/mercor-says-it-is-one-of-thousands-of-companies-hit-by-the-recent-litellm-attack/)
