An Acceptable Use Policy (AUP) is the cornerstone of enterprise Claude governance. Without one, your employees don't know what they're allowed to do, your compliance team can't evidence controls, and your security team can't enforce boundaries. With one, you have a clear foundation for everything else.
This guide walks through what an effective Claude AUP must contain, the reasoning behind each section, and a template you can adapt for your organisation — based on the policies we've developed and refined across 200+ enterprise Claude deployments.
Why Every Enterprise Needs a Claude AUP Before Wide Deployment
A surprising number of organisations deploy Claude to hundreds of employees without any formal usage policy. The typical result: confidential client data submitted to personal Claude.ai accounts, employees using Claude for purposes that create legal risk, no mechanism to update guidance as capabilities evolve, and no audit evidence if a compliance review asks what controls were in place.
An AUP addresses all of these. It takes 1-2 weeks to develop properly and reduces compliance risk dramatically. It also signals to your employees that leadership has thought through Claude governance — which increases responsible usage.
The Seven Essential Sections of a Claude AUP
1. Purpose and Scope
This policy governs the use of Claude (by Anthropic) and any other AI language model tools approved by [Organisation Name] ("the Company"). It applies to all employees, contractors, and third parties with access to Company-approved AI tools. Use of personal or non-approved AI tools for Company business is governed by the Company's general IT Acceptable Use Policy.
The scope section must explicitly address the question of personal accounts. Many employees assume they can use their personal Claude.ai account for work tasks because it seems harmless. Make it explicit: approved tools are listed by name; personal accounts are not approved for work use involving any company, client, or confidential data.
2. Approved Use Cases
Approved uses of Claude include: drafting and editing professional communications and documents; summarising and analysing publicly available or Company-approved internal information; generating code, templates, and structured outputs; research synthesis and knowledge discovery; and other tasks approved by your line manager and consistent with your job responsibilities. All Claude outputs must be reviewed and verified by the responsible employee before use in any work product.
The final sentence — requiring human review — is critical. It establishes that Claude outputs are drafts requiring professional judgement, not finished work. This is both good practice and important for limiting liability.
3. Prohibited Data Categories
The following categories of data must NOT be submitted to Claude without explicit written approval from the Chief Information Security Officer: personally identifiable information (PII) about employees, customers, or third parties; protected health information (PHI); payment card data (PCI); non-public financial information; attorney-client privileged communications; trade secrets or confidential intellectual property; credentials, API keys, or access tokens; and any data classified as Confidential or above under the Company's Data Classification Policy.
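Where prompts pass through an internal gateway or plugin before reaching Claude, the policy can be partially enforced in code. Below is a minimal sketch of a pre-submission screen for a few of these categories, assuming such a gateway exists; the patterns, category names, and the screen_prompt helper are illustrative only and are no substitute for your data classification tooling.

```python
import re

# Illustrative patterns only -- a real DLP check would rely on your data
# classification tooling, not a handful of regexes.
PROHIBITED_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key_or_token": re.compile(r"\b(?:sk-|AKIA|ghp_)[A-Za-z0-9_-]{10,}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the prohibited-data categories detected in a prompt."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(prompt)]

hits = screen_prompt("Card 4111 1111 1111 1111, contact jane@example.com")
if hits:
    print(f"Blocked: prompt appears to contain {', '.join(hits)}; "
          "route to CISO approval per Section 3.")
```

A check like this catches only the obvious cases; the policy language still carries the obligation, and ambiguous data should default to the approval route.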
4. Prohibited Actions
The following actions are prohibited: using non-Company-approved AI accounts for work involving Company data; sharing Company AI tool credentials with others; attempting to circumvent usage monitoring or access controls; submitting data categories listed in Section 3 without approval; using Claude to generate content that could constitute harassment, discrimination, or legal violation; representing AI-generated content as original human work in contexts where this distinction matters; and using Claude for tasks expressly prohibited by applicable law or regulation.
5. Quality Assurance Requirements
This section specifies review requirements by use case type. Different outputs carry different risk if incorrect:
- Low risk (internal drafts, brainstorming, research summaries) — standard employee review required
- Medium risk (client communications, reports, code for production) — manager or peer review required
- High risk (legal documents, financial statements, regulated disclosures) — qualified professional sign-off required, and the output clearly marked as AI-assisted
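If you track use cases in an intake form or route requests through an internal tool, the tiering above can be made machine-readable. The sketch below is one way to do that; the use case categories, tier assignments, and the required_review helper are examples, not a standard.

```python
# Illustrative mapping of AUP risk tiers to review requirements.
REVIEW_REQUIREMENTS = {
    "low":    {"review": "standard employee review", "ai_label_required": False},
    "medium": {"review": "manager or peer review", "ai_label_required": False},
    "high":   {"review": "qualified professional sign-off", "ai_label_required": True},
}

# Example tier assignments for common use case categories.
USE_CASE_TIER = {
    "internal_draft": "low",
    "research_summary": "low",
    "client_report": "medium",
    "production_code": "medium",
    "regulated_disclosure": "high",
}

def required_review(use_case: str) -> dict:
    """Look up review obligations for a use case; unknown use cases default to 'high'."""
    tier = USE_CASE_TIER.get(use_case, "high")
    return {"tier": tier, **REVIEW_REQUIREMENTS[tier]}

print(required_review("client_report"))
# {'tier': 'medium', 'review': 'manager or peer review', 'ai_label_required': False}
```

Defaulting unknown use cases to the high tier keeps the failure mode conservative: a new use case gets the strictest review until it is formally classified.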
6. Confidentiality and Intellectual Property
Employees must treat Claude outputs with the same confidentiality obligations as other Company work product. Outputs generated in the course of employment are the property of the Company. Employees must not use Claude to reproduce substantial portions of third-party copyrighted content. Employees are responsible for ensuring Claude outputs do not infringe third-party intellectual property rights before use.
7. Consequences and Review Process
The AUP must specify consequences for violations (typically mirroring your broader IT policy), an exception request process for use cases that require approved deviations, and a review cadence (we recommend every 6 months, as Claude capabilities and regulations evolve rapidly).
Implementing the AUP: Training and Acknowledgement
An AUP has no effect if employees haven't read and understood it. Effective implementation requires:
- Policy acknowledgement at onboarding — employees acknowledge the AUP before receiving Claude access (see the provisioning sketch after this list)
- Annual re-acknowledgement — all Claude users acknowledge the current version annually
- Manager briefing — line managers understand the policy and are accountable for their team's compliance
- Training integration — AUP content is covered in Claude onboarding training, not as a separate checkbox exercise
- Plain language summary — a one-page "What you need to know" card alongside the formal policy
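The acknowledgement gate can be enforced at provisioning time rather than on trust. Here is a minimal sketch, assuming acknowledgement records are available from an HR or policy-management system keyed by employee ID and AUP version; the record format and the may_provision_claude_access helper are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical acknowledgement records -- in practice these would come from
# your HR or policy-management system.
ACKNOWLEDGEMENTS = {
    "emp-1042": {"aup_version": "2.1", "acknowledged_on": date(2025, 3, 1)},
}

CURRENT_AUP_VERSION = "2.1"
REACK_PERIOD = timedelta(days=365)  # annual re-acknowledgement

def may_provision_claude_access(employee_id: str, today: date) -> bool:
    """Grant access only if the current AUP version was acknowledged within the last year."""
    record = ACKNOWLEDGEMENTS.get(employee_id)
    if record is None or record["aup_version"] != CURRENT_AUP_VERSION:
        return False
    return today - record["acknowledged_on"] <= REACK_PERIOD

print(may_provision_claude_access("emp-1042", date(2025, 9, 1)))  # True
print(may_provision_claude_access("emp-9999", date(2025, 9, 1)))  # False: no record
```

Tying the check to the AUP version also handles re-acknowledgement automatically: when the policy is revised, access lapses until employees accept the new version.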
Keeping Your AUP Current as Claude Evolves
Claude's capabilities are expanding rapidly. Features like computer use, expanded context windows, and MCP integrations create new use case categories that may not be covered by your initial AUP. Build in a semi-annual review cycle and assign a named owner (typically your AI governance lead or CISO) to trigger reviews when significant new capabilities are released.
The most effective AUP maintenance approach we've seen: a standing monthly "AI Governance" meeting that reviews new use case requests, updates the approved use case register, and flags any AUP amendments needed. This keeps governance lightweight and responsive rather than letting it turn into an annual compliance exercise.