Your employees are already using ChatGPT with company data. You know it. I know it. The question stopped being “should we allow it?” about two years ago.
The real question is whether your current setup is compliant. And for most organizations, the answer is: not yet.
That’s not a crisis. It’s a gap. Let’s close it.
## Shadow AI Is a GDPR Problem, Not Just an IT Problem
Shadow AI is what happens when employees sign up for free ChatGPT accounts and paste in customer emails, contract clauses, internal reports. No malice involved. They’re just trying to work faster.
But under GDPR, the moment personal data enters a third-party system without proper contractual safeguards, you have a compliance issue. Not a theoretical one. The Italian DPA (Garante) fined OpenAI EUR 15 million in December 2024. Regulators are watching this space closely.
The fix isn’t banning ChatGPT. It’s deploying it correctly.
## Which ChatGPT License Is GDPR Compliant? The Tier Breakdown
Not all ChatGPT licenses are equal under GDPR. The critical factor is whether OpenAI offers a Data Processing Agreement (DPA) for your tier. Without one, you don’t have the contractual basis required by GDPR Article 28.
Here’s the breakdown:
| Tier | DPA Available | Training on Your Data |
|---|---|---|
| Free | No | Yes (default) |
| Plus / Pro | No | Yes (opt-out available) |
| Team | Yes | No (default) |
| Business | Yes | No (default) |
| Enterprise | Yes | No (default) |
*As of March 2026. OpenAI updates license terms regularly. Verify current DPA availability directly with OpenAI.*
The key takeaway: Free and Plus accounts don’t come with a DPA. If your employees are using personal accounts for work, you’re processing personal data through a third party without the contractual framework GDPR requires. That’s a problem regardless of how careful individual users are.
Team tier is the minimum for business use. It falls under OpenAI’s Business Terms (updated January 1, 2026), which include a DPA. Data entered into Team, Business, and Enterprise accounts is not used for model training by default.
Plus and Pro do offer a toggle to opt out of training. But opt-out toggles don’t replace a DPA. The toggle is a feature setting. A DPA is a legal contract.
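To make the tier rule operational, the table above can be encoded as a simple lookup that onboarding or procurement scripts check against. A minimal sketch; the tier names and fields mirror the table in this article, not any official OpenAI API:

```python
# Sketch: encode the tier table above so a script can flag
# account tiers that lack a DPA (GDPR Art. 28 requirement).
TIERS = {
    "free":       {"dpa": False, "trains_on_data": True},
    "plus":       {"dpa": False, "trains_on_data": True},  # opt-out toggle exists, but no DPA
    "pro":        {"dpa": False, "trains_on_data": True},
    "team":       {"dpa": True,  "trains_on_data": False},
    "business":   {"dpa": True,  "trains_on_data": False},
    "enterprise": {"dpa": True,  "trains_on_data": False},
}

def approved_for_personal_data(tier: str) -> bool:
    """A tier is acceptable only if a DPA is available."""
    info = TIERS.get(tier.lower())
    return bool(info and info["dpa"])

print(approved_for_personal_data("Team"))  # True
print(approved_for_personal_data("plus"))  # False
```

The point of writing it down as data: when OpenAI changes its terms, you update one table instead of re-auditing every policy document that mentions tiers.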
### What the DPA actually covers
A Data Processing Agreement under GDPR Article 28 is not just a formality. It is the contractual backbone of your relationship with the processor. It must specify:
- The subject matter and duration of the processing
- The nature and purpose of the processing
- The type of personal data and categories of data subjects
- The obligations and rights of the controller (your organization)
- The processor’s obligation to process data only on documented instructions from the controller
- Confidentiality obligations for anyone processing the data
- Technical and organizational security measures under Article 32
- Conditions for engaging sub-processors
- Assistance obligations for data subject rights requests
- Deletion or return of data after the contract ends
OpenAI’s DPA for Team, Business, and Enterprise covers these points. But you should read it. The DPA defines what OpenAI commits to and what remains your responsibility. For example, OpenAI’s DPA states that the controller must determine the lawfulness of its processing. That is your job. The DPA does not make your processing lawful. It makes the processor relationship compliant.
If your legal team or DPO has not reviewed the DPA, that is a gap. Download it, read it, and confirm it meets your requirements before deploying ChatGPT for business use.
## Data Residency: Where Does Your Data Actually Go?
This is where it gets more nuanced. Even with a DPA in place, you need to know where data is processed.
EU Data Residency is currently available only for Enterprise, Edu, and API tiers. OpenAI introduced this in February 2025 and expanded it in January 2026 with in-region GPU inference, meaning prompts and completions stay within the EU.
Team and Business tiers process data in the US. That means you need Standard Contractual Clauses (SCCs) and a Transfer Impact Assessment (TIA) to cover the cross-border transfer. The DPA from OpenAI includes SCCs, but you still need to do your own TIA. That’s your responsibility, not OpenAI’s.
If your data classification includes special categories under Article 9 — health data, biometric data, data revealing racial or ethnic origin — think carefully about whether US processing is acceptable for your risk profile, even with SCCs in place.
### How to complete a Transfer Impact Assessment
The TIA is often the document organizations skip. They sign the DPA, note that SCCs are included, and move on. That leaves a compliance gap.
A TIA documents your assessment of whether the legal framework in the recipient country provides adequate protection for the transferred data. For US transfers, the EU-US Data Privacy Framework (DPF) applies if the recipient is certified. OpenAI’s DPF certification status should be verified directly.
Your TIA should cover:
- What data transfers to the US. Be specific. Prompts containing personal data, metadata, usage logs. Not “some data.”
- What legal protections apply in the US. Reference the DPF, FISA Section 702 safeguards, and Executive Order 14086 establishing the Data Protection Review Court.
- What technical safeguards the processor implements. Encryption in transit and at rest, access controls, data isolation. OpenAI documents these in their security practices.
- Your assessment of residual risk. Given the legal and technical safeguards, is the level of protection essentially equivalent to EU standards? Document your reasoning.
The TIA does not need to conclude that US protection is identical to EU protection. It needs to demonstrate that you assessed the situation, considered supplementary measures, and made a documented decision. That is what accountability under GDPR Article 5(2) requires.
## The “Assumed Compliance” Trap
This pattern is everywhere. A company rolls out ChatGPT Team, checks the DPA box, and writes a one-paragraph policy that says something like “use common sense with sensitive data.”
That’s not an AI usage policy. That’s a hope.
A GDPR-compliant AI usage policy covers:
- Approved tools list. Which AI tools are sanctioned, which tiers, and who has access.
- Data classification rules. What can go into ChatGPT and what can’t. Be specific. “No personal data” is too vague — define categories and give examples.
- Training requirements. Every user needs to understand the basics. Not a 40-page document. A 15-minute briefing with practical dos and don’ts.
- Incident reporting. If someone pastes customer data into a free account, that’s potentially a data breach. Staff need to know how to report it and what happens next.
An AI usage policy under GDPR isn’t a nice-to-have. It’s documentation of your technical and organizational measures under Article 32. Auditors will ask for it.
### What a practical AI usage policy includes
Here is a template structure that works for most mid-size organizations:
Section 1: Approved tools. List every AI tool sanctioned for business use, the specific license tier, and the DPA status. Example: “ChatGPT Team (DPA signed March 2026). No other AI tools are approved for processing personal data.”
Section 2: Data classification. Define three to four categories with specific examples:

- Green: public information, marketing content, internal process notes without personal data.
- Yellow: internal business data, anonymized statistics, aggregated reports.
- Red: personal data of any kind, client information, employee records, financial data.
- Black: special categories under Article 9, data subject to professional secrecy.

Green data may be entered into approved AI tools. Yellow data may be entered with caution. Red and black data require explicit approval or must be anonymized first.
Section 3: Usage rules. Specific dos and don’ts. Do: use ChatGPT to draft template correspondence, summarize public information, brainstorm ideas. Do not: paste client emails, upload contracts, enter employee performance data, share financial records. These examples should come from your actual business context, not generic lists.
Section 4: Incident reporting. If an employee enters personal data into a non-approved tool (a free ChatGPT account, for example), that is potentially a personal data breach under Article 33. The policy must define: who to report to, within what timeframe, and what documentation is needed. Your DPO then assesses whether notification to the supervisory authority is required within 72 hours.
Section 5: Consequences. Not punitive by default, but clear. First-time accidental misuse might trigger retraining. Repeated or deliberate violations may trigger disciplinary measures. The point is not to scare employees. It is to signal that the policy is enforceable, not advisory.
Distribute this policy during a 15-minute team briefing. Walk through the examples. Answer questions. Then make it available on your intranet. Update it when tools change.
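The classification scheme in Section 2 can also be backed by a lightweight pre-paste check. This is a minimal sketch, not a DLP product: the regex patterns below are illustrative placeholders for obvious red-category markers (email addresses, dates of birth, German IBANs) and would need tuning to your own data landscape.

```python
import re

# Sketch of a pre-paste guardrail for the green/red classification.
# Patterns are illustrative examples only -- a real deployment needs
# richer detection plus human judgment; this supplements the policy,
# it does not replace it.
RED_PATTERNS = [
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",              # email addresses
    r"\b\d{2}\.\d{2}\.\d{4}\b",                  # dates of birth (DD.MM.YYYY)
    r"\bDE\d{2}(?:\s?\d{4}){4}(?:\s?\d{2})?\b",  # German IBANs
]

def classify(text: str) -> str:
    """Return 'red' if any personal-data pattern matches, else 'green'."""
    for pattern in RED_PATTERNS:
        if re.search(pattern, text):
            return "red"
    return "green"

print(classify("Summarize our Q3 process notes"))       # green
print(classify("Reply to max.mustermann@example.com"))  # red
```

A check like this catches the accidental paste, which in practice is most of the risk. Deliberate circumvention is what Section 5 of the policy is for.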
## DPIA: When a Data Protection Impact Assessment Is Required
Article 35 GDPR requires a Data Protection Impact Assessment (DPIA) when processing is likely to result in high risk to individuals. Large-scale use of ChatGPT with personal data meets that threshold for most organizations.
The EDPB ChatGPT Taskforce published its report in May 2024 with preliminary views on lawfulness and transparency requirements. One point stands out: companies cannot shift compliance responsibility to OpenAI through terms and conditions. You are the data controller. OpenAI is the processor. The obligations under Articles 5, 6, and 35 sit with you.
Your DPIA should cover:
- The purpose and necessity of processing personal data through ChatGPT
- The specific risks to data subjects (profiling, automated decision-making, data leakage)
- Measures to mitigate those risks (tier selection, access controls, usage policy, monitoring)
- Residual risk assessment after mitigation
If your DPO hasn’t done a DPIA for ChatGPT yet, that’s the first conversation to have on Monday morning.
Not sure where your organization stands? Take the free AI GDPR Compliance Check — 2 minutes, 7 questions, instant assessment.
## What “ChatGPT GDPR Compliant” Actually Means
Here’s the thing people get wrong: “GDPR-compliant” is not a property of the tool. It’s a property of your deployment.
ChatGPT can be deployed in a GDPR-compliant way. But it requires deliberate choices:
- Tier selection. Team at minimum, Enterprise if you need EU data residency.
- Contractual framework. DPA in place, SCCs for US processing, TIA completed.
- Policy and governance. Written AI usage policy, data classification, training for users.
- Risk assessment. DPIA completed, reviewed by your DPO, documented.
- Ongoing monitoring. Shadow AI detection, policy compliance checks, regular review of OpenAI’s terms (they change).
Skip any of these and you have a gap. Cover all of them and you have a defensible position.
This isn’t about perfection. It’s about demonstrating that you’ve thought it through, documented your decisions, and implemented reasonable safeguards. That’s what accountability under Article 5(2) looks like in practice.
## Where Do You Stand?
Most organizations are somewhere in the middle. They’ve got the right instinct — they know unmanaged ChatGPT use is a risk. But they haven’t formalized the controls yet.
The gap between “we should do something” and “we’ve done it” is usually smaller than people think. A proper tier, a DPA, a usage policy, and a DPIA. Four things. None of them take months.
I built a free AI GDPR Compliance Check that shows you where you stand in 2 minutes — and where you need to act. 7 questions, instant score, actionable next steps.
Jose Lugo is a CISSP-certified consultant based in Germany, specializing in GDPR-compliant AI deployments for mid-size firms. More at joselugo.de.