Understanding What Claude Actually Receives
When an employee uses Claude, three things happen that have data privacy implications. First, the content they type (the prompt) travels over an encrypted connection to Anthropic's servers. Second, Claude processes that content to generate a response. Third, depending on your deployment method and settings, some or all of this exchange may be logged.
Understanding exactly what persists where — and for how long — is the foundation of sound Claude data governance. The answer differs significantly based on how you're deploying Claude: via the Claude.ai web interface, via the Claude API directly, or via an enterprise integration built on the API.
Claude.ai (Web Interface)
The standard Claude.ai interface retains your conversation history in your account. Anthropic's default policy allows conversations to be used for model improvement, though enterprise accounts can opt out. Conversation history is accessible until you delete it. This is appropriate for general business use but may not be suitable for highly confidential information.
Claude API (Direct Integration)
The Claude API is stateless by default — Anthropic does not retain conversation content after the API call completes, except for limited-duration abuse monitoring. Enterprise customers can configure their API calls to explicitly disable even this retention. This is the appropriate channel for workflows involving sensitive business data.
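Statelessness has a practical consequence worth making concrete: because Anthropic keeps nothing between calls, the caller owns all conversation state and must resend it with every request. A minimal sketch of what one request payload looks like (the model name and helper function are illustrative, not part of any official SDK):

```python
# Sketch: the Claude API is stateless, so the client assembles the full
# conversation context for every call. Nothing persists server-side between
# calls (beyond the time-limited abuse-monitoring window described below).

def build_request(history, new_user_message, model="claude-example-model"):
    """Assemble the body for one API call; the model name is a placeholder."""
    messages = history + [{"role": "user", "content": new_user_message}]
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": messages,  # the entire history travels on every call
    }

history = [
    {"role": "user", "content": "Summarize our Q3 risk memo."},
    {"role": "assistant", "content": "Here is a summary..."},
]
payload = build_request(history, "Now draft an exec briefing from that summary.")
# Delete `history` client-side and the conversation is gone; there is no
# server-side copy to recover, subpoena, or breach.
```

This is why API-based deployments are the natural home for sensitive workflows: the only durable record of the exchange is the one your own systems choose to keep.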
Claude.ai Teams & Enterprise Plans
The Teams and Enterprise plans include training data opt-out, ensuring your conversations are never used to train future models. They also include enhanced administrative controls, audit logging, and the ability to configure retention settings at the organization level.
Data Retention: The Complete Picture
The most common misconception we encounter in enterprise assessments is that "using the API means no data is stored." This was approximately true in early 2024 but is now more nuanced. Here's the current state:
| Deployment Method | Conversation Stored? | Training Use? | Retention Period |
|---|---|---|---|
| Claude.ai (free) | Yes, in account | Default: Yes | Until deleted |
| Claude.ai Pro | Yes, in account | Default: Yes (opt-out available) | Until deleted |
| Claude.ai Teams | Yes, admin-controlled | No (opted out) | Configurable |
| Claude API (standard) | No persistent storage | No | In-flight only (abuse logs aside) |
| Claude API (Enterprise) | No persistent storage | No | None (abuse logging can be disabled) |
Note: Anthropic maintains abuse monitoring logs for all API calls for a limited period (typically 30 days) to detect and respond to policy violations. This is separate from conversation content storage and is standard practice across AI API providers.
Zero-Retention Mode: When to Enable It and How
Zero-retention API mode should be the default configuration for enterprises handling sensitive data. It means Anthropic does not persist your prompts or responses beyond the immediate API call: there is nothing stored on Anthropic's servers to be exposed by subpoena, data breach, or model training.
Enabling zero-retention does not require a special application or negotiation. For API customers, it's a configuration setting. For Claude.ai Teams customers, it's enabled by default as part of the opt-out from training data use. For Claude.ai Enterprise customers, it's configurable at the workspace level.
When to use zero-retention mode — essentially a decision matrix for your governance policy:
- Customer PII in any form → Zero-retention required
- Unpublished financial data → Zero-retention required
- Legal strategy or privileged communications → Zero-retention required, plus Legal approval
- HR data about identifiable employees → Zero-retention required
- Product roadmaps or unannounced features → Zero-retention recommended
- General research and drafting → Standard API acceptable
- Public-facing content creation → Standard API acceptable
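The decision matrix above can be encoded directly into tooling so routing is enforced rather than remembered. A sketch, assuming your own category names and channel labels (none of these identifiers come from Anthropic's API):

```python
# Sketch: the governance decision matrix as a fail-closed routing policy.
# Category and channel names are illustrative placeholders.

POLICY = {
    "customer_pii":           ("zero_retention", None),
    "unpublished_financials": ("zero_retention", None),
    "legal_privileged":       ("zero_retention", "Legal approval"),
    "hr_identifiable":        ("zero_retention", None),
    "product_roadmap":        ("zero_retention_recommended", None),
    "general_research":       ("standard_api", None),
    "public_content":         ("standard_api", None),
}

def route(data_category):
    """Return (channel, extra_approval) for a data category."""
    if data_category not in POLICY:
        # Unclassified data should fail closed, not fall through to the
        # standard API by accident.
        raise ValueError(f"Unclassified data category: {data_category}")
    return POLICY[data_category]
```

The design point is the fail-closed branch: data that nobody has classified is the data most likely to be mishandled, so the policy refuses it rather than defaulting to the least restrictive channel.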
GDPR and EU Data Handling
For European enterprises, the data handling questions become more complex due to GDPR's requirements around data transfers, processing agreements, and individual rights.
Data Processing Agreements
If your Claude usage involves personal data of EU residents, you need a Data Processing Agreement (DPA) with Anthropic establishing them as a data processor under your control as the data controller. Anthropic provides standard DPA templates for enterprise customers. This is non-negotiable for GDPR compliance — without a DPA, you cannot lawfully use Claude to process EU personal data.
Standard Contractual Clauses
Because Anthropic is a US company, data transfers from the EU to Anthropic's servers require a lawful transfer mechanism. The standard mechanism is Standard Contractual Clauses (SCCs), which Anthropic's enterprise DPA incorporates. Ensure your legal team reviews the SCCs and that you've documented your transfer impact assessment.
Article 22: Automated Decision-Making
If Claude outputs feed decisions about individuals — credit decisions, hiring screens, customer service resolutions — Article 22 of GDPR requires specific safeguards. You must ensure a human can override Claude-informed decisions, that individuals can request human review, and that you have appropriate legal basis for the automated processing. This isn't a reason not to use Claude; it's a reason to document your workflows clearly.
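One way to make the "human can override" safeguard auditable is to hard-code it into the decision flow: a Claude recommendation about an individual is never applied automatically, only queued until a person rules on it. A minimal sketch under those assumptions (the function and field names are illustrative):

```python
# Sketch: an Article 22 safeguard where Claude-informed decisions about
# individuals are parked for human review rather than auto-applied.

def apply_decision(claude_recommendation, reviewer_verdict=None):
    """Hold Claude-informed decisions until a human has reviewed them."""
    if reviewer_verdict is None:
        # No solely automated decision: the recommendation waits in a
        # review queue instead of taking effect.
        return {"status": "pending_human_review",
                "recommended": claude_recommendation}
    # The human verdict is final and is recorded alongside the model's
    # recommendation, which supports later audit and explanation requests.
    return {"status": "final",
            "recommended": claude_recommendation,
            "final": reviewer_verdict,
            "human_reviewed": True}

# A hiring screen where the reviewer overturns the recommendation:
held = apply_decision("reject")
decided = apply_decision("reject", reviewer_verdict="advance")
```

Keeping both the recommendation and the human verdict in the record also gives you the documentation trail that responses to individual review requests depend on.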
HIPAA Considerations for Healthcare Enterprises
Healthcare enterprises using Claude to process Protected Health Information (PHI) require a Business Associate Agreement (BAA) with Anthropic. Anthropic offers BAAs for enterprise customers — this is a standard agreement for HIPAA-covered entities working with technology vendors who may handle PHI.
Key points for healthcare Claude deployments:
- A BAA is required before any PHI enters Claude workflows — no exceptions
- PHI should only flow through API-based deployments covered by the BAA, not consumer Claude.ai
- Your BAA should specify the HIPAA minimum-necessary standard: pass to Claude only the PHI required to accomplish the specific task
- Audit logging of Claude API calls involving PHI should be implemented to support HIPAA audit requirements
- Staff who use Claude with PHI need HIPAA-specific training on Claude acceptable use
The BAA requirement also applies to clinical decision support tools built on Claude's API, telehealth platforms using Claude for documentation, and any system where Claude processes patient information as part of its function.
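The audit-logging point above has a subtle trap: the audit log must not itself become a PHI store. One common pattern, sketched here with illustrative field names, is to log call metadata plus a content hash, never the content:

```python
# Sketch: an audit-log record for a PHI-bearing Claude API call. The log
# captures who, when, why, and a hash for integrity, but never the PHI
# itself. Field names are illustrative, not a standard schema.

import hashlib
import json
from datetime import datetime, timezone

def audit_record(user_id, purpose, request_body):
    """Build a PHI-free audit entry for one API call."""
    canonical = json.dumps(request_body, sort_keys=True).encode()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "purpose": purpose,  # should map to a documented permitted PHI use
        "content_sha256": hashlib.sha256(canonical).hexdigest(),
        "phi_involved": True,
    }

rec = audit_record(
    "clinician-042",
    "discharge-summary-draft",
    {"messages": [{"role": "user", "content": "<PHI omitted>"}]},
)
```

The hash lets auditors confirm that a logged call matches a retained request without the log reproducing patient information, which keeps the audit trail outside the minimum-necessary blast radius.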
Practical Data Handling Checklist for IT & Legal Teams
Use this checklist during your Claude deployment assessment to ensure data handling is properly configured before rollout:
- ☐ Identify all Claude deployment channels (claude.ai, API, integrated tools) in use or planned
- ☐ Confirm training data opt-out is enabled for all accounts handling business data
- ☐ Map data classification tiers to Claude access levels (which tiers can enter Claude under what conditions)
- ☐ Configure zero-retention API mode for workflows involving Orange or Red classification data
- ☐ Execute DPA with Anthropic before any EU personal data is processed through Claude
- ☐ Execute BAA with Anthropic before any PHI enters Claude workflows (healthcare)
- ☐ Implement audit logging for regulated-data Claude API calls
- ☐ Train employees on data classification and Claude input controls
- ☐ Establish quarterly data handling review cadence tied to Claude model updates
- ☐ Document Claude data processing in your Data Processing Register (GDPR requirement)
For a deeper dive into compliance controls and policy templates, see our Claude Governance Framework white paper and our enterprise governance policy guide.