Policy Guidelines

A practical framework for responsible AI use in your organisation.

Informational only — not legal advice. Vendor policies change frequently; always verify current terms directly with each vendor and consult qualified legal counsel for compliance decisions.

These guidelines distil the most important practices from enterprise AI governance frameworks, GDPR compliance programmes, and HIPAA security rules into actionable steps for organisations of any size.

Use them as a starting point and adapt to your organisation's size, risk profile, and regulatory requirements. Consult qualified legal and compliance counsel before finalising any policy.

AI Governance Framework

A governance framework defines who is responsible for AI decisions and how those decisions are made consistently.

1. Appoint a designated AI owner or AI Governance Committee responsible for approving tools, reviewing incidents, and maintaining your AI policy.

2. Create and maintain a central registry of all AI tools in use: vendor, plan tier, approved use cases, and documented risk assessment.

3. Establish a formal approval process for adopting new AI tools — require risk review before any tool is used with business data.

4. Document your risk acceptance decisions — if you choose to use a tool with known limitations, record the rationale.

5. Review your governance framework at least annually, and immediately following any significant AI incident or major vendor policy change.
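The central registry in step 2 can start as simply as one structured record per tool. A minimal Python sketch, where the field names (`plan_tier`, `risk_accepted_by`, and so on) are illustrative assumptions rather than a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in the central AI tool registry (illustrative field names)."""
    vendor: str
    tool_name: str
    plan_tier: str                  # e.g. "consumer", "business", "enterprise"
    approved_use_cases: list[str]
    risk_assessment_date: date
    risk_accepted_by: str           # who signed off, per the governance framework

# A hypothetical registry entry for a fictional vendor:
registry: list[AIToolRecord] = [
    AIToolRecord(
        vendor="ExampleVendor",
        tool_name="ExampleAssistant",
        plan_tier="enterprise",
        approved_use_cases=["code review", "internal documentation"],
        risk_assessment_date=date(2024, 1, 15),
        risk_accepted_by="AI Governance Committee",
    )
]
```

Even a spreadsheet works at first; the point is that every tool has a vendor, tier, use-case list, and a named risk owner recorded in one place.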

Data Classification & AI Usage Rules

Not all data is equal. Define clear rules about which data types may be used with which tools.

1. Implement a four-tier data classification: Public, Internal, Confidential, and Regulated/Restricted.

2. Public data: usable with any AI tool. Internal data: business/team plan required. Confidential data: enterprise plan or on-premises only. Regulated data: only with a BAA/DPA in place.

3. Create a clear, accessible reference for employees showing which tools are approved for which data classifications.

4. Require data owners to classify datasets before any AI-assisted processing begins.

5. Prohibited at all times: raw PHI, financial account numbers, SSNs, authentication credentials, unpublished M&A data, and attorney-client communications.
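The tier rules above can be encoded as a simple lookup so that approval checks are consistent rather than ad hoc. A sketch in Python — the capability labels (`"business"`, `"baa_dpa_in_place"`, etc.) are assumed names for illustration:

```python
# Hypothetical mapping from data classification to the minimum plan or
# contractual arrangement required before that data may touch an AI tool.
CLASSIFICATION_REQUIREMENTS = {
    "public": {"any"},
    "internal": {"business", "enterprise", "on_premises"},
    "confidential": {"enterprise", "on_premises"},
    "regulated": {"baa_dpa_in_place"},
}

def is_usage_allowed(classification: str, tool_capability: str) -> bool:
    """Return True if a tool with the given capability may process this data class."""
    allowed = CLASSIFICATION_REQUIREMENTS.get(classification.lower())
    if allowed is None:
        # Unknown or unclassified data: default deny until a data owner classifies it.
        return False
    return "any" in allowed or tool_capability in allowed
```

Note the default-deny branch: data that has not been classified by its owner (step 4) is treated as unusable, which mirrors the "classify before processing" rule.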

Vendor Management & Contracts

The contract you sign with an AI vendor is your primary legal protection. Don't rely on consumer terms.

1. Require a Data Processing Agreement (DPA) or equivalent before using any AI tool with EU personal data (GDPR) or UK personal data (UK GDPR).

2. For healthcare data: obtain a HIPAA Business Associate Agreement (BAA) from every vendor that may process PHI.

3. For financial services: verify GLBA compliance and document your vendor due diligence.

4. Contract checklist: data retention limits, deletion rights, training opt-out, sub-processor list, security incident notification (within 72 hours for GDPR), and data residency.

5. Ensure contracts explicitly state that your data will not be used to train the vendor's models without your written consent.

6. Review and renegotiate contracts annually — or immediately when a vendor updates its terms.

Employee AI Usage Policy

Clear, written policies prevent accidental data exposure and set consistent expectations.

1. Publish a written AI Usage Policy covering: approved tools, prohibited data types, how to report incidents, and disciplinary consequences for policy violations.

2. Require all employees to complete AI policy training before using any AI tool for business purposes.

3. Establish a "default deny" posture: employees must use only approved tools — unapproved tools require a formal exception process.

4. Provide practical guidance: how to anonymise inputs, when to use AI vs. when to seek human expertise, and how to verify AI-generated outputs.

5. Create a non-punitive incident reporting mechanism — early reporting of accidental data exposure significantly reduces harm.

6. Update the policy within 30 days of any material change to vendor terms or regulations.

Technical Security Controls

Policies must be backed by technical controls to be effective.

1. Deploy enterprise AI platforms with SSO (single sign-on) and MFA (multi-factor authentication) — never consumer accounts with individual credentials.

2. Use SCIM for automated user provisioning and de-provisioning to ensure access is revoked promptly when employees leave.

3. Implement RBAC (role-based access controls) to restrict access to sensitive AI workflows to only those who need it.

4. Enable audit logging for all AI platform usage and integrate logs into your SIEM for centralised monitoring.

5. Deploy DLP (data loss prevention) rules to detect when sensitive data patterns (e.g., SSNs, credit card numbers) are sent to AI tools.

6. For high-risk workflows, consider browser isolation or dedicated endpoints to prevent data leakage.

7. Test your controls regularly — run tabletop exercises and simulate accidental disclosure scenarios.
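To make the DLP idea in step 5 concrete, here is a deliberately simplified Python sketch of pattern scanning on outbound prompts. Real DLP products use far more robust detection (checksums such as Luhn validation, context, confidence scoring); these two regexes are illustrative only:

```python
import re

# Illustrative patterns only — production DLP rules are much more sophisticated.
DLP_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US SSN format
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # 13-16 digit runs
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound AI prompt."""
    return [name for name, pattern in DLP_PATTERNS.items() if pattern.search(text)]

print(scan_prompt("Customer SSN is 123-45-6789"))   # ['ssn']
print(scan_prompt("Summarise this meeting"))        # []
```

A hit would typically block or quarantine the request and notify the security team, feeding the audit trail described in step 4.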

Incident Response

Prepare for the inevitable — a data exposure involving AI tools requires a distinct response playbook.

1. Create a specific AI incident response playbook covering: accidental PHI input, prompt injection attacks, unauthorised tool use, and vendor data breaches.

2. Define clear escalation paths: who is notified first, who makes the call on regulatory notification, and who communicates with affected individuals.

3. GDPR/UK GDPR: notification to the supervisory authority within 72 hours of becoming aware of a breach.

4. HIPAA: notification to HHS and affected individuals within 60 days of discovery.

5. After any incident: conduct a root-cause analysis, update controls, and circulate learnings to prevent recurrence.

6. Run a simulation exercise at least annually to validate your playbook before you need it.
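The notification windows in steps 3 and 4 are hard deadlines, so a playbook should compute them immediately on discovery. A minimal Python sketch (the regime labels are assumed names; the deadline is the latest permissible time — regulators still expect notification without undue delay):

```python
from datetime import datetime, timedelta

# Regulatory notification windows from the playbook steps above.
NOTIFICATION_WINDOWS = {
    "gdpr": timedelta(hours=72),   # supervisory authority, from breach awareness
    "hipaa": timedelta(days=60),   # HHS and affected individuals, from discovery
}

def notification_deadline(regime: str, discovered_at: datetime) -> datetime:
    """Return the latest permissible notification time for a given regime."""
    return discovered_at + NOTIFICATION_WINDOWS[regime]

discovered = datetime(2024, 3, 1, 9, 0)
print(notification_deadline("gdpr", discovered))   # 2024-03-04 09:00:00
print(notification_deadline("hipaa", discovered))  # 2024-04-30 09:00:00
```

Stamping the discovery time and computed deadlines into the incident ticket at intake removes ambiguity during a stressful response.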
