One question comes up in nearly every enterprise Claude implementation: "What does Anthropic do with our data?" The answer shapes everything else—from which Claude deployment option you choose, to how you classify data for Claude use, to how you design your approval workflows.
The confusion is understandable. There are three different Claude deployment paths—web app, API, Claude Enterprise—and each has different data handling policies. Each one creates different compliance implications. And most organizations don't know the differences until they're deep into implementation.
In this guide, we walk through Anthropic's actual data handling policies and what they mean for your organization. We share the framework we use when helping enterprises decide which Claude deployment path fits their use cases and compliance requirements.
How Anthropic Handles Your Data: The Facts
Let's start with what happens to your data under each deployment model. This is the foundation everything else rests on.
Standard Claude API (claude.ai, API without zero-retention)
When you use Claude through the web app at claude.ai or through the standard Claude API, your conversations are stored by Anthropic. Here's exactly what that means:
- Your conversations are retained: Anthropic keeps the full conversation history—your prompts and Claude's responses—in their systems.
- Retention period: Anthropic retains this data for a limited window—roughly 30 days by default, and longer for certain account types or content flagged for safety review. After that window, it's deleted. Check Anthropic's current retention terms, as these change.
- Data may be reviewed: Anthropic states in their terms that conversations may be used to improve Claude, to understand user behavior, and for safety monitoring. This means humans at Anthropic may read your conversations.
- Data is not used for training Claude by default: Anthropic has stated that conversations are not used to train the Claude model by default—though you should verify the current terms for your account type and settings. Under the default terms, your proprietary information isn't feeding back into Claude's weights.
- Data subject to US law: Anthropic is US-based. Your data is stored in US systems and subject to US data protection laws (including legal discovery).
For many use cases—brainstorming, research summaries, general analysis—this is perfectly fine. You're not entering sensitive data anyway. But if you're analyzing customer records, working with financial data, or handling information covered by GDPR, HIPAA, or other regulations, the standard API likely won't meet your compliance obligations.
Claude Enterprise (Zero-Retention API)
Claude Enterprise offers a fundamentally different data handling model. Here's what changes:
- Zero data retention: Anthropic does not store your prompts or Claude's responses. The conversation is processed in real-time and then deleted immediately after the response is generated.
- No human review: Because there's no stored data, Anthropic cannot review your conversations. No one at Anthropic reads your prompts.
- No improvement data: Your conversations cannot be used to improve Claude or understand user behavior. They're gone.
- No logging on Anthropic's side: Because Anthropic keeps no record of what you asked Claude, you must handle all logging and audit trails yourself.
- GDPR, HIPAA, SOC2 compatible: Because there's no data retention, Claude Enterprise can support compliance with stricter regulations. (Though you still need to handle data classification, user access controls, and audit logging on your end.)
Claude Enterprise is the right choice when you're handling sensitive data—customer information, financial records, health data, proprietary algorithms. Zero-retention mode eliminates a major compliance risk.
Claude Enterprise Zero-Retention Mode Explained
Zero-retention is the critical feature that makes Claude Enterprise work for sensitive data. But it's important to understand exactly what it does—and doesn't—do.
What zero-retention actually means
When you make an API call with zero-retention enabled:
- Your prompt is transmitted to Anthropic's servers
- Claude processes your prompt and generates a response
- The response is returned to you
- Both the prompt and response are immediately deleted from Anthropic's systems
- No copy is retained for training, safety review, or any other purpose
Importantly: Your prompt is still sent to Anthropic. It travels over the internet to their servers. This is not on-premises deployment. If you need Claude to never leave your network, zero-retention won't solve that.
But for most enterprises, this is fine. The key compliance requirement is usually not "Claude never sees sensitive data" but rather "Anthropic doesn't retain, log, or use our sensitive data." Zero-retention solves that problem.
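The call flow above can be sketched in code. This is a minimal illustration, not the real SDK: zero-retention is configured at the account level with Anthropic rather than per-request, and the `transport` callable here is a stand-in for an actual API client.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ZeroRetentionClient:
    """Illustrates the zero-retention call flow: prompt in, response out,
    with nothing retained by the provider after the response is generated.

    `transport` is a stand-in for a real API client; zero-retention itself
    is an account-level arrangement, not a request flag."""
    transport: Callable[[str], str]

    def complete(self, prompt: str) -> str:
        # 1. The prompt is transmitted to the provider's servers.
        # 2. The model processes it and generates a response.
        response = self.transport(prompt)
        # 3. The response is returned to the caller.
        # 4. Under zero-retention, the provider deletes both immediately.
        #    Note there is deliberately no history kept in this class either:
        #    any audit logging is the caller's responsibility.
        return response


# Stub transport for illustration; a real deployment would call the API.
client = ZeroRetentionClient(transport=lambda p: f"[response to {len(p)} chars]")
print(client.complete("Summarize this quarterly report..."))
```

The point of the sketch: the client holds no conversation state, which mirrors what zero-retention means operationally—if you want history or audit trails, you have to build them yourself.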
When to use zero-retention
Use zero-retention when:
- You're processing customer data (names, contact info, account information)
- You're working with financial information (transaction data, pricing, costs)
- You're handling health information (HIPAA-regulated data)
- You're working with legal privileged information
- You're processing data covered by GDPR or similar regulations
- You're analyzing proprietary algorithms or trade secrets
You may not need zero-retention for:
- General research and analysis (public information)
- Brainstorming and ideation
- Writing and content creation (non-sensitive)
- Code debugging (non-proprietary code)
- Summarization of public information
Unsure which deployment path is right for you?
Our governance assessment maps your use cases to the right Claude deployment and zero-retention policy.
Claude Projects: What Persists and For How Long
Claude Projects is Anthropic's way of creating persistent context across multiple conversations. It's useful for ongoing work, but it changes the data retention picture.
How Claude Projects handles data
When you create a Claude Project:
- Project context is stored: The system prompt, documents you upload, and configuration persist in Claude Projects storage.
- Conversations are stored: Unlike API calls, conversations within projects are stored on Anthropic's servers.
- Retention period: Projects are retained indefinitely (until you delete them). Conversations within projects are retained as long as the project exists.
- Data review: Just like standard API, Anthropic may review stored conversations for safety and improvement purposes.
- Zero-retention not available: Claude Projects does not support zero-retention mode. All data is retained.
This is important: Do not use Claude Projects for sensitive data. Projects data is retained indefinitely, which creates compliance risk.
Claude Projects makes sense for:
- Long-running research projects (using public information)
- Persistent knowledge bases (non-sensitive information)
- Ongoing client work (with public/internal data)
- Document analysis of non-sensitive documents
If you need project-like persistence with sensitive data, use Claude Enterprise's API with zero-retention and build your own context management system on top of it. (Yes, it's more work. But it's the compliant way to do it.)
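A minimal sketch of that self-managed context layer, under stated assumptions: the class name and file-based store are illustrative, and `model_call` stands in for a real zero-retention API call that accepts a message history.

```python
import json
from pathlib import Path


class ManagedContext:
    """Project-like persistence built on your side, so the API itself can
    stay zero-retention. History lives in a file you control (and can
    delete to satisfy your own retention rules); each call replays it.

    `model_call` is a stand-in for a real zero-retention API call."""

    def __init__(self, store: Path):
        self.store = store
        self.messages = json.loads(store.read_text()) if store.exists() else []

    def ask(self, prompt: str, model_call) -> str:
        self.messages.append({"role": "user", "content": prompt})
        reply = model_call(self.messages)  # full history sent on each turn
        self.messages.append({"role": "assistant", "content": reply})
        self.store.write_text(json.dumps(self.messages))  # persists locally only
        return reply

    def purge(self) -> None:
        """Your deletion obligation: wipe the locally retained context."""
        self.messages = []
        self.store.unlink(missing_ok=True)
```

Because the provider retains nothing, the local store is the only copy of the context—which means deletion, access control, and encryption of that store are entirely your responsibility.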
Data Classification Rules for Claude Inputs
Understanding Anthropic's policies is step one. Implementing them in your organization is step two. That requires clear data classification rules.
Four-tier classification for Claude
Use this framework to decide which data goes where:
| Data Type | Examples | Recommended Claude Path | Details |
|---|---|---|---|
| Public | Published research, marketing materials, blog posts | Any Claude option | No restrictions. Can be processed with standard API or Projects. |
| Internal | Internal documentation, team processes, strategy ideas | Standard API with caution | Fine for Claude API. Avoid Projects for sensitive internal data. Can be shared in conversations. |
| Sensitive | Customer data, financial info, proprietary methods | Claude Enterprise (zero-retention) | Requires zero-retention mode. Never use standard API or Projects. |
| Restricted | Legal privileged info, government classified data, special-category PII | Do not use Claude | Some data should not go to Claude regardless of deployment mode. |
The classification should be part of your approval process. Before a user can send data to Claude, they answer: "What data classification is this?" If the answer is "Sensitive," they must use Claude Enterprise zero-retention. If it's "Restricted," they can't use Claude at all.
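That approval gate is simple enough to encode directly. A sketch of the four-tier routing, assuming nothing beyond the table above—the function and dictionary names are illustrative:

```python
from enum import Enum


class DataClass(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    SENSITIVE = "sensitive"
    RESTRICTED = "restricted"


# Mirrors the four-tier table: each classification maps to an allowed
# deployment path, or to None when Claude must not be used at all.
ROUTES = {
    DataClass.PUBLIC: "any Claude option",
    DataClass.INTERNAL: "standard API (avoid Projects for sensitive internal data)",
    DataClass.SENSITIVE: "Claude Enterprise with zero-retention",
    DataClass.RESTRICTED: None,  # never send to Claude
}


def approve(classification: DataClass) -> str:
    """Return the permitted deployment path, or refuse outright."""
    route = ROUTES[classification]
    if route is None:
        raise PermissionError("Restricted data must not be sent to Claude.")
    return route
```

Embedding the rule in code (a pre-submit hook, a gateway service, a chat-ops bot) turns the policy question "What data classification is this?" into a forced step rather than an honor system.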
AI Compliance: SOC2, HIPAA & GDPR
We've published our complete compliance guide covering how to make Claude work with SOC2, HIPAA, GDPR, and other major regulatory frameworks. Includes data handling policies, audit requirements, and vendor assessment templates.
Read the Compliance Guide →
Building Your Internal Claude Data Policy
Once you understand Anthropic's policies and your data classification, you're ready to write your internal Claude data policy.
Policy checklist
Your internal policy should cover:
- Data classification rules: Which data types require which Claude deployment? Who decides if data is "sensitive"?
- Approved Claude deployments: Are you using standard API, Claude Enterprise, Projects? For what purposes?
- Retention in Claude Projects: If using Projects, how long do you retain them? Who owns deletion?
- User consent: Do users need to acknowledge they understand data is retained/not retained before using Claude?
- Audit logging: Do you log which users send what types of data to Claude? (You should, especially for sensitive data.)
- Escalation path: If someone's unsure whether data is safe to share with Claude, who do they ask?
- GDPR/HIPAA specific rules: If you're regulated, what additional controls apply?
- Third-party integrations: If Claude connects to other systems via API or MCP, what data can flow through those integrations?
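The audit-logging item deserves special attention: with zero-retention, Anthropic keeps no record of requests, so your log is the only evidence of what was sent, by whom, and under which classification. A minimal sketch—field names are illustrative and should be adapted to your own logging pipeline:

```python
import json
import time
from pathlib import Path


def log_claude_use(log_file: Path, user: str, classification: str,
                   deployment: str, purpose: str) -> None:
    """Append one entry to a JSON-lines audit trail.

    Deliberately logs metadata only, never the prompt contents --
    otherwise the audit log itself becomes a sensitive-data store."""
    entry = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "classification": classification,  # from your four-tier scheme
        "deployment": deployment,          # e.g. "enterprise-zero-retention"
        "purpose": purpose,                # short free text, not the prompt
    }
    with log_file.open("a") as f:
        f.write(json.dumps(entry) + "\n")
```

Logging the classification and purpose rather than the prompt keeps the trail useful for audits without recreating the retention problem you adopted zero-retention to avoid.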
A sample policy section
Here's how this might look in your actual policy document:

> Claude Data Handling Policy (excerpt)
> 1. Classify data before sending it to Claude: Public, Internal, Sensitive, or Restricted.
> 2. Public and Internal data may be processed with the standard Claude API. Do not place sensitive internal data in Claude Projects.
> 3. Sensitive data may only be processed through Claude Enterprise with zero-retention enabled.
> 4. Restricted data must never be sent to Claude, under any deployment.
> 5. Every use involving Sensitive data must be logged: who, when, data classification, and purpose.
> 6. When in doubt, ask the AI governance lead before sharing.

This simple policy creates clear rules without being overly restrictive. Users know what they can and can't do. You have a record if something goes wrong.
Frequently Asked Questions
Does Anthropic train on our Claude prompts?
No, not by default. Anthropic has stated that they do not use your conversations to train the Claude model by default—verify the current terms for your account type and settings. However, conversations may be reviewed by Anthropic staff for safety and quality monitoring—unless you use Claude Enterprise zero-retention mode, in which case they're deleted before any review can happen.
What is zero-retention mode and when do we need it?
Zero-retention mode (available only with Claude Enterprise) ensures that your prompts and responses are not stored by Anthropic. They're processed and immediately deleted. Use it whenever you're processing sensitive data: customer information, financial records, health data, proprietary information, or anything covered by GDPR/HIPAA. For public and general internal data, standard API is fine.
Can we use Claude for customer data under GDPR?
Only with Claude Enterprise zero-retention mode and proper data processing agreements. GDPR requires that personal data not be retained longer than necessary. Zero-retention satisfies this because data isn't retained at all. But you still need a Data Processing Agreement (DPA) with Anthropic, and you need a lawful basis under GDPR for the processing—consent or another valid basis. The DPA and zero-retention together can make Claude use GDPR-compliant; the standard API cannot.
How long does Claude Projects retain our context?
Indefinitely. Projects are retained until you explicitly delete them. Conversations within projects are stored as long as the project exists. This is why Claude Projects should not be used for sensitive data—you'd need to delete the entire project to ensure the sensitive data is gone, and Projects don't support zero-retention anyway. For persistent context with sensitive data, use Claude Enterprise API with your own context management system.