
Claude Data Residency Guide: Where Your Data Goes and How to Stay Compliant

March 28, 2026 · 18 min read · Compliance & Security

Data residency is the top compliance blocker for enterprise Claude deployments. Before your legal, security, or procurement team signs off on a Claude rollout, someone will ask: "Where does our data go?" The answer shapes everything — from which plan you procure to which use cases are permitted on day one.

This guide distils what we've learned across 200+ enterprise Claude deployments, covering Anthropic's infrastructure, contractual controls, and the practical steps needed to satisfy GDPR, HIPAA, FedRAMP, and sovereign data requirements.

Need a Data Residency Assessment?

Our compliance specialists map your regulatory obligations to Claude's data flows and produce a deployment-ready data processing register in 2 weeks.

Request Free Assessment →

Anthropic's Infrastructure and Data Processing Locations

Claude API requests are processed on Amazon Web Services (AWS) infrastructure. Anthropic's primary processing region is us-east-1 (Virginia, USA). This is the default for all API and Claude.ai traffic unless alternative arrangements are in place through the Enterprise plan.

Key facts about Anthropic's data infrastructure:

  • No model training on API data by default. Prompts and completions submitted via the API are not used to train Claude models unless you have explicitly opted in to a feedback programme.
  • Zero Data Retention (ZDR). Claude Enterprise customers receive ZDR by default — no prompt or response content is stored after the API call completes.
  • Encryption in transit and at rest. All data is encrypted using TLS 1.2+ in transit and AES-256 at rest.
  • SOC 2 Type II certified. Anthropic maintains SOC 2 Type II certification covering security, availability, and confidentiality.
  • HIPAA BAA available. Business Associate Agreements are available to Enterprise customers requiring HIPAA compliance.

Enterprise customers with strict data residency mandates — EU data staying in the EU, Australian data staying in Australia — should engage Anthropic's enterprise team directly to discuss current and roadmap regional options. Anthropic's infrastructure is expanding, and regional endpoint availability changes frequently.

Mapping Claude's Data Flows for Your Compliance Register

Your Data Protection Officer or compliance team will need a complete data flow map before approving Claude for any use case involving personal data. Here is the standard data flow anatomy for a Claude API integration:

Inbound Data Flow (Prompt)

When your application sends a prompt to Claude: (1) your application constructs the API request, (2) the request travels over TLS to Anthropic's API endpoint, (3) Anthropic's inference infrastructure processes the prompt through the model, (4) the completion is returned to your application. The prompt content — including any personal data you include — is processed by Anthropic's systems during inference. With ZDR, no content is retained after the response is returned.
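The inbound flow above can be sketched as a plain HTTPS request. The sketch below shows the general shape of a request to Anthropic's Messages API endpoint; the endpoint URL and header names reflect the public API, but the model name and payload values are illustrative, so verify them against Anthropic's current API reference before relying on them:

```python
import json

API_URL = "https://api.anthropic.com/v1/messages"  # Anthropic's US-hosted endpoint, reached over TLS

def build_claude_request(prompt: str, api_key: str) -> dict:
    """Construct (but do not send) a Claude API request.

    Everything in the `messages` field -- including any personal
    data embedded in the prompt -- is transmitted to Anthropic's
    inference infrastructure when this request is sent.
    """
    return {
        "url": API_URL,
        "headers": {
            "x-api-key": api_key,              # never write this value to logs
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        "body": json.dumps({
            "model": "claude-sonnet-4-5",       # illustrative model name
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }
```

Separating request construction from dispatch like this gives you a single choke point where PII scrubbing and audit logging (covered below in this guide) can be enforced before anything leaves your network.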

What Counts as Personal Data in Prompts

This is where many organisations underestimate their exposure. Common personal data categories that end up in Claude prompts include:

  • Employee names, email addresses, and job titles in HR prompts
  • Customer names and account references in support ticket summaries
  • Patient initials or identifiers in healthcare document prompts
  • Financial account references in analysis prompts
  • IP addresses or user IDs in technical log analysis

Best practice: implement PII scrubbing at the application layer before prompts are sent to Claude. Replace personal identifiers with anonymised tokens. This reduces your data residency risk surface significantly and is required in several sectors regardless of Claude's own data handling.
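A minimal sketch of such a scrubbing layer, using regex rules for email addresses and US-style SSNs. This is illustrative only: a production system would cover far more PII categories and typically combine regex with ML-based entity recognition.

```python
import re

# Illustrative patterns only -- extend per your data categories.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_pii(text: str) -> tuple[str, dict[str, str]]:
    """Replace PII with anonymised tokens before the prompt leaves
    your application. Returns the scrubbed text plus a token map
    you can use to re-identify values in Claude's response locally.
    """
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        def repl(match: re.Match, label: str = label) -> str:
            token = f"<{label}_{len(mapping)}>"
            mapping[token] = match.group(0)
            return token
        text = pattern.sub(repl, text)
    return text, mapping
```

Because the token map never leaves your application, the original identifiers stay inside your own trust boundary even while Claude works on the scrubbed text.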

📋

White Paper: AI Compliance — SOC 2, HIPAA & GDPR

Our 40-page compliance guide covers contractual frameworks, technical controls, and audit preparation for regulated industries deploying Claude.

Download Free →

GDPR Requirements for Claude Deployments

For EU-based organisations — or any organisation processing EU personal data — GDPR compliance requires several specific controls when deploying Claude.

Data Processing Agreement

You must execute a Data Processing Agreement (DPA) with Anthropic before processing EU personal data via Claude. Anthropic provides a standard DPA for Enterprise customers. This DPA establishes Anthropic as a data processor acting on your instructions, includes standard contractual clauses (SCCs) for international data transfers, and defines sub-processor obligations.

Legal Basis for Processing

You need a valid GDPR legal basis for any personal data processed through Claude. Common legal bases in enterprise deployments:

  • Legitimate interests — for internal business processing such as employee productivity tools, contract analysis, or internal knowledge management
  • Contract performance — for customer-facing use cases where processing is necessary to deliver your contracted service
  • Legal obligation — for compliance monitoring, regulatory reporting, or audit support functions

Consent is rarely the appropriate basis for enterprise B2B use cases — it can be withdrawn at any time and is difficult to manage at scale — and should not be used as the primary basis unless you have a specific reason.

Data Subject Rights

Because Claude processes data transiently (with ZDR), responding to data subject access requests (DSARs) primarily concerns your own application logs, not Anthropic's retained data. Ensure your logging and data retention policies are aligned with your DSAR response obligations.

HIPAA Controls for Healthcare Claude Deployments

Healthcare organisations and their business associates must implement specific controls before using Claude to process Protected Health Information (PHI).

The foundation is a Business Associate Agreement (BAA) with Anthropic. BAAs are available on the Enterprise plan. A BAA cannot be executed on the Team or individual plans.

Minimum Necessary Standard

Under HIPAA's minimum necessary standard, you should only include the PHI actually required to accomplish the task. For example, a prompt asking Claude to summarise a clinical note should not include the patient's full name, date of birth, and social security number if only the clinical content is needed. Build PII minimisation into your prompt templates at the design stage.
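One way to build that minimisation into a prompt template at the design stage. This is a hypothetical sketch in which `record` stands in for your own application's clinical record structure, and the field names are assumptions:

```python
def build_summary_prompt(record: dict) -> str:
    """Minimum-necessary prompt: include only the clinical content,
    never the direct identifiers stored alongside it.
    """
    # Fields such as record["name"], record["dob"], and record["ssn"]
    # exist in the source record but are deliberately never referenced.
    clinical_note = record["clinical_note"]
    return (
        "Summarise the following clinical note for a handover report. "
        "Do not speculate about patient identity.\n\n"
        f"{clinical_note}"
    )
```

Centralising prompts in templates like this makes the minimum-necessary decision reviewable: your compliance team can audit a handful of templates instead of every ad hoc prompt.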

Audit Logging

HIPAA requires maintaining audit logs of who accessed what PHI and when. While Anthropic's ZDR means they don't retain content, your application must maintain its own audit trail covering: which user made the request, which prompt template was used, when the request occurred, and what category of PHI was present. We recommend structured logging to a SIEM from day one of any healthcare Claude deployment.
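A sketch of the kind of structured audit record this implies. The field names here are illustrative assumptions; map them onto your SIEM's own schema:

```python
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("claude.audit")

def audit_record(user_id: str, template_id: str, phi_category: str) -> str:
    """Build a structured audit entry for one Claude API call.

    Deliberately records metadata only -- never the prompt or
    response content, which would itself become retained PHI.
    """
    entry = {
        "event": "claude_api_call",
        "user_id": user_id,
        "prompt_template": template_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "phi_category": phi_category,
    }
    return json.dumps(entry)

# Ship each entry to your SIEM via your logging pipeline, e.g.:
# audit_logger.info(audit_record("u-1042", "clinical-summary-v3", "clinical_notes"))
```

Logging metadata rather than content keeps the audit trail itself out of PHI scope while still answering the who/what/when questions an auditor will ask.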

Sovereign Data and Cross-Border Transfer Restrictions

Sovereign data requirements — laws mandating that certain data categories remain within national borders — represent the most complex data residency challenge for Claude deployments. Countries with active data localisation requirements affecting enterprise AI use include Russia, China, India (for certain data categories), and increasingly the EU for public sector data.

Current Practical Guidance

For organisations in markets with strict localisation requirements:

  • Categorise before you deploy. Identify which data categories are subject to localisation. Many regulations apply to specific categories (financial transaction data, health records, government data) rather than all personal data.
  • Evaluate on-premise options. Anthropic offers limited on-premise deployment options for qualifying enterprise customers. These are complex and expensive — evaluate carefully against the actual risk of your use cases.
  • Design around the constraint. Many valuable Claude use cases involve no personal data at all — document drafting, code generation, template creation, internal knowledge Q&A on non-personal datasets. Prioritise these first while you resolve localisation questions for more sensitive use cases.
  • Engage legal counsel. Data localisation law is evolving rapidly; what is required in 2026 may differ significantly from what applies in 2027. Build flexibility into your implementation architecture.

Practical Implementation Controls

Across our 200+ enterprise deployments, the organisations that handle data residency compliance most effectively implement a consistent set of technical and organisational controls from day one.

Technical Controls

  • PII scrubbing layer — automated regex or ML-based scrubbing between your application and the Claude API
  • Data classification tagging — classify each prompt data source and enforce controls by classification level
  • Allowlist of permitted data categories — explicit list of what data types may be sent to Claude, enforced at API gateway level
  • Audit logging — structured logs capturing user, timestamp, use case, and data classification for every API call
  • Egress monitoring — DLP tools monitoring outbound API traffic for unexpected data patterns
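The allowlist and classification controls above reduce to a simple gateway check in front of the Claude API. A minimal sketch, with hypothetical classification names standing in for your own taxonomy:

```python
# Hypothetical data classifications permitted to reach the Claude API.
ALLOWED_CLASSIFICATIONS = {"public", "internal", "customer_anonymised"}

class DataEgressError(Exception):
    """Raised when a prompt's classification is not on the allowlist."""

def enforce_allowlist(prompt: str, classification: str) -> str:
    """Gateway check: only prompts whose source data classification
    is explicitly allowlisted may be forwarded to the Claude API.
    """
    if classification not in ALLOWED_CLASSIFICATIONS:
        raise DataEgressError(
            f"Classification '{classification}' is not approved for Claude egress"
        )
    return prompt  # safe to forward downstream
```

Enforcing this at the API gateway, rather than in each consuming application, means a new use case cannot bypass the control by accident.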

Organisational Controls

  • Data Processing Register entry for Claude covering purpose, legal basis, data categories, and retention
  • Signed DPA with Anthropic on file with your data protection function
  • Acceptable Use Policy defining what may and may not be submitted to Claude
  • Employee training covering data classification and Claude usage rules
  • Quarterly review cycle for your Claude data flows against evolving regulations

Data residency is not a one-time checkbox. As your Claude use cases expand, new data categories will come into scope. Build a lightweight review process into your AI governance programme to assess new use cases before they go live.

Frequently Asked Questions

Where does Anthropic store Claude API data?
Anthropic processes Claude API requests in AWS data centres in the United States (us-east-1 by default). Enterprise customers with specific data residency requirements can work with Anthropic's enterprise team to explore regional options. Data is not retained for training unless you opt in.
Does Claude store my prompts and responses?
By default, Anthropic does not use API data for model training. Prompts and responses are processed transiently. Claude Enterprise plan includes zero data retention (ZDR) by default, meaning no conversation data is stored after the session ends. This is a key contractual control for regulated industries.
How do I meet GDPR requirements when using Claude?
To meet GDPR requirements, execute a Data Processing Agreement (DPA) with Anthropic, avoid sending EU personal data without appropriate safeguards, implement data minimisation in your prompts, document Claude as a data processor in your processing register, and ensure you have a valid legal basis for each use case.
Can regulated industries like healthcare use Claude?
Yes. Healthcare organisations can use Claude under HIPAA by executing a Business Associate Agreement (BAA) with Anthropic, available on the Enterprise plan. This requires implementing appropriate safeguards including PII minimisation, audit logging, and restricting Claude access to authorised personnel with a need to process PHI.

Get Your Data Residency Assessment

We map your regulatory obligations to Claude's data flows and produce a deployment-ready compliance register in 2 weeks.

Request Free Assessment →

The Claude Bulletin

Weekly implementation insights, compliance updates, and deployment best practices from the field.