Every major AI tool ships with the same default assumption: your data isn’t sensitive.
That works fine if you’re summarizing meeting notes or brainstorming marketing slogans. It doesn’t work if you’re a tax advisor, a lawyer, or a financial planner in Germany handling client data protected by criminal law.
I’ve spent the last year looking at how firms in regulated industries adopt AI tools. The pattern is always the same. Someone activates ChatGPT or Copilot. Nobody changes the defaults. Client data starts flowing through systems that weren’t designed for professional secrecy. And nobody realizes there’s a problem until someone asks the right question.
The Default Settings Aren’t Designed for You
Generic AI tools are built for the broadest possible market. OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini. They’re consumer products that happen to have business plans. Their default configurations reflect that.
Out of the box, most of these tools do one or more of the following:
- Process data outside the EU. ChatGPT Free, Plus, and Pro route data through US servers. There’s no EU data residency option at those tiers.
- Lack a data processing agreement. GDPR Article 28 requires a DPA between the controller (your firm) and the processor (the AI provider). On free and consumer plans, most providers don't offer one.
- Provide no audit trail. Queries and responses aren’t logged in a way that your firm controls. If a regulator asks what data entered the system, you can’t answer.
- Don’t restrict internal access. Copilot searches everything a user has permission to see. If SharePoint permissions are loose — and in most firms they are — Copilot will surface client documents to people who shouldn’t see them.
None of this is a secret. It’s documented in the providers’ own terms. The problem is that most firms never read those terms before activating the tool.
The SharePoint permissions problem in detail
Copilot’s permission inheritance deserves special attention because it is the most common source of unintended data exposure. Here is how it works:
When a user asks Copilot a question, Copilot queries the Microsoft Graph API with that user’s permissions. It searches SharePoint, OneDrive, Exchange, and Teams. Every document, email, and chat message the user can access is fair game for Copilot’s response.
The problem is that SharePoint permissions in most organizations are a mess. Sites created years ago with “Everyone except external users” access. Team channels where former employees still have membership. Document libraries where the original creator shared broadly because it was easier than setting up proper groups.
Nobody noticed because nobody was searching across all of these at once. Users accessed documents they knew about. Copilot searches everything. A marketing intern asking “What are our current client contracts?” might get results from the legal team’s SharePoint site because someone added “All Staff” to that site in 2021 and never removed it.
This is not a hypothetical. It is the default state of most M365 environments. Before activating Copilot, run a permissions audit. Microsoft provides tools for this: SharePoint admin center reports, PowerShell scripts for bulk permission exports, and Microsoft Purview for data governance. The audit should cover every site containing client data or confidential information.
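If you want a scriptable first pass before the full audit, the Microsoft Graph API exposes enough to sample for overly broad grants. The sketch below is a minimal illustration, not a complete audit: it assumes an Azure AD app registration with the relevant Sites/Files read permissions already consented and an access token in a GRAPH_TOKEN environment variable, it only inspects top-level items in each site's default document library, and it skips pagination and folder recursion. The "broad" group names are assumptions to adapt to your tenant.

```python
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumes an app registration with Sites/Files read permissions consented
# and an access token exported as GRAPH_TOKEN.
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

BROAD = ("everyone", "all staff", "all users")  # grant names worth flagging

def get(url, params=None):
    r = requests.get(url, headers=HEADERS, params=params)
    r.raise_for_status()
    return r.json()

# Enumerate sites visible to the app (no paging handled here).
for site in get(f"{GRAPH}/sites", params={"search": "*"}).get("value", []):
    drive = get(f"{GRAPH}/sites/{site['id']}/drive")
    # Top-level items of the default document library only; a real audit
    # would recurse into folders and follow @odata.nextLink paging.
    for item in get(f"{GRAPH}/drives/{drive['id']}/root/children").get("value", []):
        perms = get(f"{GRAPH}/drives/{drive['id']}/items/{item['id']}/permissions")
        for p in perms.get("value", []):
            link_scope = p.get("link", {}).get("scope")  # e.g. "organization"
            who = p.get("grantedToV2", {}).get("group", {}).get("displayName", "")
            if link_scope == "organization" or who.lower() in BROAD:
                print(f"FLAG: {site['displayName']} / {item['name']} -> "
                      f"{who or 'org-wide link'}")
```

The SharePoint admin center reports and PowerShell remain the right tools for the exhaustive export; a sample like this just tells you quickly how bad the problem is and which sites to prioritize.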
This Isn’t Just About GDPR Fines
GDPR gets all the attention because the fines are large and the headlines are dramatic. But for professionals bound by Section 203 of the German Criminal Code (Strafgesetzbuch), the stakes go beyond administrative penalties.
Section 203 StGB makes it a criminal offense for certain professionals, including Steuerberater, Rechtsanwälte, and Wirtschaftsprüfer, to disclose client secrets without authorization. The penalty is up to one year of imprisonment or a criminal fine. This is Berufsgeheimnis: professional secrecy backed by criminal law.
When a tax advisor pastes client financial data into an AI tool that processes it on US servers without a DPA, that’s a potential Section 203 violation. It doesn’t matter that the intent was to save time. It doesn’t matter that the data wasn’t “leaked” in the traditional sense. The disclosure happened when the data left the firm’s controlled environment.
GDPR Article 9 makes it worse. Health data, data revealing racial or ethnic origin, religious beliefs, trade union membership. These special categories require an explicit legal basis for processing, and professional files are full of them: a tax return's church tax entry reveals religious affiliation, and its medical deductions reveal health conditions. Dropping them into a generic AI prompt doesn't meet that standard.
The combination of Section 203 and GDPR creates a dual obligation. You need both the criminal-law safeguards and the data protection framework. Generic AI satisfies neither by default.
Where Generic AI Breaks Down: A Practical Comparison
| Requirement | What Regulated Firms Need | What Generic AI Provides |
|---|---|---|
| Data residency | Processing within the EU (GDPR Art. 44-49) | US processing by default; EU option only at enterprise tier |
| Data processing agreement | Signed DPA per GDPR Art. 28 | Available only on Team/Enterprise plans |
| Audit trail | Logged queries and responses under firm’s control | No firm-controlled logging |
| Access controls | Role-based access aligned with Berufsgeheimnis | User-level permissions inherited from existing (often broken) setup |
| Data isolation | Client data separated from training data | Training opt-out varies by plan; not guaranteed on free tiers |
| DPIA documentation | Required under GDPR Art. 35 for AI processing | Not provided; firm must create its own |
This isn’t a scare-tactic checklist. It’s what compliance actually looks like when your professional obligations include criminal secrecy. Every item on this list is solvable. But none of them are solved by the default configuration.
What Compliant AI Actually Requires
Compliant AI for regulated firms isn’t a different product. It’s the same technology with the right configuration. In practice:
1. The right license tier. For ChatGPT, that means Team at minimum (DPA available, no training on your data). For Copilot, that means M365 E3/E5 with proper configuration. Free and consumer plans are off the table for client data.
2. EU data residency. Data must be processed and stored within the European Union. Microsoft offers EU Data Boundary for M365. OpenAI offers EU processing for Enterprise customers. This needs to be confirmed in writing, not assumed.
3. Access controls that match your secrecy obligations. Before activating Copilot, audit your SharePoint permissions. Apply sensitivity labels to client data. Restrict Copilot’s search scope so it only surfaces documents a user is authorized to see — not just technically able to access.
4. A data protection impact assessment. GDPR Article 35 requires a DPIA where processing is likely to create a high risk for data subjects, and AI processing of client data under professional secrecy will usually meet that threshold. The DPIA documents what data flows through the system, what risks exist, and how you mitigate them. Microsoft and OpenAI publish supporting documentation, but the responsibility sits with your firm.
5. An internal AI policy. What can employees enter into AI tools? What’s off-limits? “Use common sense” isn’t a policy. Specific examples and clear boundaries are.
What to Do Monday Morning: A Four-Week Implementation Plan
Knowing the requirements is one thing. Implementing them is another. Here is a practical timeline for a firm of 15 to 50 employees.
Week 1: Inventory and decision
Day 1-2: Shadow AI audit. Survey all staff on which AI tools they currently use for work. Include browser extensions, mobile apps, and personal accounts. Be clear that the purpose is to build proper controls, not to punish anyone. You will almost certainly discover that several employees already use ChatGPT with client data.
Day 3-4: Tool and tier decision. Based on the inventory, choose your standard tools and license tiers. For ChatGPT: Team at minimum. For Copilot: confirm you have M365 E3/E5. For any other tools: verify DPA availability and data residency.
Day 5: Sign the DPA. For OpenAI Team/Enterprise, the DPA is available through the admin dashboard. For firms under Section 203, determine whether a supplementary secrecy agreement is needed and engage legal counsel if so.
Week 2: Policy and training
Day 1-3: Write the AI usage policy. Keep it to two pages maximum. Cover: approved tools, data classification with examples, usage rules, incident reporting, consequences. Use real examples from your practice: "Do not enter a Mandant's tax assessment into ChatGPT" is clearer than "Do not enter personal data."
Day 4-5: Staff briefing. A 15-minute session covering the policy, with Q&A. Record attendance. This is part of your Article 32 documentation of organizational measures.
Week 3: Technical configuration
If using Copilot:
- Run a SharePoint permissions audit. Export permissions for all sites containing client data.
- Remediate: remove overly broad access groups, create role-based groups, move sensitive documents to restricted libraries.
- Deploy sensitivity labels. At minimum: Public, Internal, Confidential, and Highly Confidential (Section 203) if applicable.
- Configure auto-labeling rules for common patterns: tax ID numbers, client reference formats, contract headers. A pattern-validation sketch follows this list.
- Enable DLP policies to prevent external sharing of Confidential and Highly Confidential documents.
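Auto-labeling itself is configured in Microsoft Purview, but it helps to test what a reliable pattern looks like before you build the rule. The sketch below is a hypothetical pre-check for one pattern from the list above: it finds 11-digit candidates and validates them against the ISO 7064 MOD 11,10 check digit the German tax ID (Steuer-IdNr) uses, which cuts false positives compared with a bare 11-digit regex. The export folder path is illustrative.

```python
import re
from pathlib import Path

CANDIDATE = re.compile(r"\b\d{11}\b")  # any 11-digit run is a candidate

def is_valid_idnr(digits: str) -> bool:
    """Check the ISO 7064 MOD 11,10 check digit used by the German Steuer-IdNr."""
    if len(digits) != 11 or digits[0] == "0":
        return False
    product = 10
    for d in digits[:10]:
        s = (int(d) + product) % 10
        if s == 0:
            s = 10
        product = (2 * s) % 11
    check = 11 - product
    if check == 10:
        check = 0
    return check == int(digits[10])

# Scan a sample export folder (path is illustrative) and report hits,
# so you know which libraries need the Purview auto-label rule most.
for path in Path("./sharepoint_export").rglob("*.txt"):
    text = path.read_text(errors="ignore")
    hits = [m for m in CANDIDATE.findall(text) if is_valid_idnr(m)]
    if hits:
        print(f"{path}: {len(hits)} probable tax ID(s)")
```

The same candidate-then-validate pattern works for your own client reference formats; the checksum step is what keeps an 11-digit invoice number from triggering a Highly Confidential label.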
If using ChatGPT Team:
- Set up the workspace in the admin console.
- Migrate users from personal accounts to the Team workspace.
- Disable personal account access to ChatGPT on corporate devices if possible (browser policy, firewall rules); a network-level spot check is sketched after this list.
- Verify that training is disabled in the workspace settings.
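The block itself lives in your browser policy or firewall, not in code, but the quick spot-check referenced above tells you whether it actually holds on a given device. The domains are the real consumer endpoints; everything else is an illustrative sketch. Note its limit: it only verifies a network-level block (DNS or firewall). A browser-only policy won't show up here and has to be checked in the browser itself.

```python
import requests

# Consumer endpoints that should be unreachable from corporate devices
# once the firewall or DNS rule is in place.
BLOCKED = ["https://chatgpt.com", "https://chat.openai.com"]

for url in BLOCKED:
    try:
        r = requests.get(url, timeout=5, allow_redirects=False)
        print(f"{url}: reachable (HTTP {r.status_code}) -- block NOT in effect")
    except requests.exceptions.RequestException as e:
        print(f"{url}: blocked or unreachable ({type(e).__name__})")
```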
Week 4: Documentation
DPIA. Complete the data protection impact assessment. Document the processing, the risks, the mitigation measures, and the residual risk. Consult your DPO.
Records of processing. Update your Article 30 records to include AI tool processing. Document the processor (OpenAI/Microsoft), the data categories, the legal basis, and the retention periods.
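Article 30 records are usually kept in a register, not in code, but fixing the fields for each entry keeps the register consistent and auditable. The dataclass below is a purely illustrative way to structure the fields named above; the field names and the sample entry are assumptions, not a legal template.

```python
from dataclasses import dataclass

@dataclass
class ProcessingRecord:
    """One Article 30 entry for an AI tool acting as processor (illustrative)."""
    activity: str                 # what the processing is
    processor: str                # e.g. OpenAI or Microsoft
    data_categories: list[str]    # categories of personal data involved
    legal_basis: str              # GDPR Art. 6 (and Art. 9, if applicable)
    retention: str                # retention period or deletion rule
    transfer_mechanism: str = ""  # only if data leaves the EU

# Illustrative entry, not legal advice:
record = ProcessingRecord(
    activity="Drafting client correspondence with ChatGPT Team",
    processor="OpenAI (DPA signed, workspace training disabled)",
    data_categories=["contact data", "case correspondence"],
    legal_basis="Art. 6(1)(b) GDPR - contract performance",
    retention="Deleted from workspace after 30 days",
    transfer_mechanism="EU-US Data Privacy Framework",
)
print(record)
```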
Transfer Impact Assessment. If data is transferred to the US, complete a TIA documenting the transfer mechanism you rely on (EU-US Data Privacy Framework adequacy or standard contractual clauses) and your assessment of the residual risk.
This is not a perfect implementation. Perfect takes months and costs a fortune. This is a defensible implementation that addresses the major compliance requirements and puts your firm in a position where you can answer a regulator’s questions with documentation, not excuses.
The Point Isn’t to Avoid AI
Firms that avoid AI entirely aren’t being cautious. They’re falling behind. AI tools genuinely save time on document review, correspondence drafting, research, and analysis. The productivity gains are real.
But there’s a difference between using AI and using AI correctly. For firms with professional secrecy obligations, “correctly” means configured, documented, and on the right license. Not because regulators might check. Because your clients trust you with their most sensitive information, and that trust deserves the same rigor you apply to everything else in your practice.
The gap between generic AI and compliant AI isn’t as wide as most people think. It’s configuration, documentation, and a license upgrade. Most firms can close it in two to four weeks.
Not sure where your firm stands? The AI Compliance Check takes 2 minutes and shows where action is needed.
Prefer to talk directly? Book a free 30-minute consultation — no sales pitch, just an honest assessment.
Jose Lugo is a CISSP-certified AI compliance consultant based in Germany. He helps tax advisors, law firms, and financial planners deploy AI tools that meet GDPR and Section 203 StGB requirements.