34% of German employees use AI at work with personal accounts. That’s the Bitkom number. Software AG puts it at 50% in a separate study. Both studies say the same thing: most companies have no rules for AI usage.
No rules means no control. No control means data ends up where it shouldn’t.
That’s why you need an AI acceptable use policy.
What Happens Without a Policy
Without a documented policy, every employee makes their own rules. The assistant copies client correspondence into free ChatGPT to improve the wording. The analyst uses Copilot for everything without thinking about what data the tool can access. And the colleague down the hall won’t touch AI at all because she’s afraid of getting in trouble.
All three are acting in good faith. All three are wrong.
The problem isn’t that people use AI or don’t use AI. The problem is that there’s no framework. No approval, no prohibition, no guidance. Everyone guesses.
For regulated firms, this hits harder. Law firms, tax advisors, financial consultants. Client data is subject to professional confidentiality obligations, in Germany under Section 203 of the Criminal Code (StGB). Entering client data into an AI tool without proper contractual safeguards is a GDPR violation, and for these professions it can also be a criminal offense.
A use policy doesn’t fix everything on its own. But it’s the foundation. The document that says: “This is how we use AI in this company.” Two to four pages. Not a novel.
What Goes Into the Policy
Let me walk through the sections a solid AI use policy should cover. Not as a rigid template, but as a structure you adapt to your firm.
Approved and Blocked Tools
The most important section. This is where you list, by name, which AI tools are permitted and which are not.
“Microsoft Copilot for M365 is approved. ChatGPT Free is not approved. ChatGPT Plus is not approved. Perplexity is not approved.”
Why the distinction? Because not every ChatGPT license comes with a Data Processing Agreement. The free version uses your inputs to train the model, and without a DPA you're missing the contractual basis required by GDPR Article 28. The same gap applies to Plus and Pro.
If you never told your employees what’s allowed, you can’t blame them for guessing wrong.
What Must Never Be Entered Into Any AI Tool
Even with approved tools, some inputs are off-limits. This section draws the line.
Client names. Client case data. Financial details. Personal data of any kind. Internal strategy documents. Draft contracts with real names and figures. Health data. Anything covered by professional secrecy.
The rule is simple: if you wouldn’t hand the information to a stranger on the street, don’t type it into an AI tool. Not even an approved one.
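To make that line concrete, here's a minimal sketch of what a pre-submission screen could look like. This is a toy illustration of the idea, not a replacement for real DLP tooling; the client names and patterns are hypothetical, and the tax number regex is deliberately simplified.

```python
import re

# Hypothetical client register -- in practice this would come from your CRM.
KNOWN_CLIENTS = {"Müller GmbH", "Schmidt & Partner"}

# Toy patterns for obviously sensitive strings. Real DLP engines ship with
# far more robust detectors; these regexes only illustrate the idea.
SENSITIVE_PATTERNS = [
    re.compile(r"\bDE\d{20}\b"),           # German IBAN
    re.compile(r"\b\d{2}/\d{3}/\d{5}\b"),  # German tax number (simplified format)
]

def is_safe_to_submit(text: str) -> bool:
    """Return False if the text obviously contains client or financial data."""
    if any(client in text for client in KNOWN_CLIENTS):
        return False
    return not any(p.search(text) for p in SENSITIVE_PATTERNS)
```

The point isn't the regexes. The point is that "don't type it in" becomes something you can actually check.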
Data Classification and AI
Many firms already have a data classification scheme. The use policy needs to connect that classification to AI usage. If you don’t have one yet, now is a good time.
The principle: not all data is equally sensitive. Your policy maps each category to an AI clearance level.
Confidential data (anything tied to specific clients, HR records, financial data) goes into no AI tool. Period. Internal data, like general process descriptions or anonymized examples, can go into approved tools. Public information can go into any tool.
Sounds straightforward. It is. But without this mapping, every employee has to make a judgment call on every single input. That doesn’t scale.
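As a sketch, the mapping can be as simple as a lookup table. The labels and tool names below are placeholders; yours come from your own classification scheme and the approved-tools section above.

```python
# Hypothetical labels and tool names -- substitute your own classification
# scheme and the approved list from the tools section.
AI_CLEARANCE = {
    "confidential": "none",       # no AI tool, period
    "internal": "approved-only",  # approved tools only
    "public": "any",              # any tool
}

APPROVED_TOOLS = {"Microsoft Copilot for M365"}

def may_enter(classification: str, tool: str) -> bool:
    """Decide whether data with this classification may go into this tool."""
    level = AI_CLEARANCE.get(classification, "none")  # unknown label => treat as confidential
    if level == "any":
        return True
    if level == "approved-only":
        return tool in APPROVED_TOOLS
    return False
```

Note the default: anything with an unknown label is treated as confidential. That's the safe failure mode.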
What to Do When Someone Makes a Mistake
Someone will eventually enter sensitive data into the wrong tool. It happens. The question is whether there’s a reporting process or whether the employee hides it out of fear.
Your policy needs a section for this. Who do you notify? How quickly? What are the next steps? Is there a reporting obligation to the supervisory authority under GDPR Article 33?
This is critical: no blame. If employees fear consequences, they report nothing. Then you find out when the regulator comes knocking. That’s worse.
Training
A policy that nobody reads is useless. The training section defines how employees get familiar with the rules. When does the initial training happen? Are there regular refreshers?
Since February 2025, the EU AI Act (Article 4) requires companies that deploy AI systems to ensure their staff has a "sufficient level of AI literacy." That's not a suggestion. It's EU law. Your use policy and the training behind it are a solid first step toward meeting that requirement.
Ownership
Who owns the policy? Who keeps it current? Who decides if a new tool gets approved? Who grants exceptions?
In small firms, that’s often the managing partner. In larger setups, it might be the data protection officer or IT lead. What matters is that there’s a name attached. Not “management.” A person.
Policy Alone Isn’t Enough
Here’s the part many firms miss. A use policy is a document. It says: “Don’t paste client data into ChatGPT.” Fine. But what actually stops someone from doing it?
Nothing. Except the hope that everyone read the policy and follows it.
That’s why you need both: the policy and technical enforcement. Sensitivity Labels in Microsoft 365 that prevent documents marked “confidential” from being pasted into AI tools. DLP policies that detect and block certain data patterns. Conditional Access that blocks access to unapproved AI services entirely.
The policy without technical controls is a suggestion. Technical controls without a policy have no legal basis. You need both.
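For a sense of what the Conditional Access piece could look like, here's a minimal sketch using the Microsoft Graph REST API. It assumes you already hold a token with the Policy.ReadWrite.ConditionalAccess permission; the app ID is a placeholder you'd replace with the actual ID of the unapproved service in your tenant, and the policy starts in report-only mode so you can observe the impact before enforcing.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "..."  # acquired via your usual Entra ID auth flow (placeholder)

# Placeholder -- replace with the application ID of the unapproved AI
# service as it appears in your tenant.
UNAPPROVED_AI_APPS = ["00000000-0000-0000-0000-000000000000"]

policy = {
    "displayName": "Block unapproved AI services",
    # Report-only first; switch to "enabled" once you've reviewed the impact.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "applications": {"includeApplications": UNAPPROVED_AI_APPS},
        "users": {"includeUsers": ["All"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```

A sketch under those assumptions, not a deployment script. The takeaway: every enforcement mechanism in the policy should map to something this concrete.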
What Not to Do
Three things I see regularly that don’t work.
First: 40-page documents. Nobody reads a 40-page rulebook. Keep the policy to two to four pages. Anything beyond that goes into separate annexes that get referenced, not embedded.
Second: blanket bans. “AI usage is prohibited in our firm.” Sounds safe. It achieves the opposite. Employees use the tools anyway, just secretly. Studies show over half of employees hide their AI usage from their employer. A total ban pushes usage underground, where you can’t see or control it.
Third: copying a template off the internet without adapting it. Every firm uses different tools, processes different data types, runs different workflows. A generic template that doesn’t match your specific infrastructure gives employees false confidence. Or worse, it approves tools that aren’t properly secured in your environment.
Next Step
I build AI use policies as part of every Copilot compliance engagement. Because the policy and the technical implementation belong together. The policy says what’s allowed. Sensitivity Labels, DLP, and Conditional Access enforce it. And training makes sure your team understands both.
Want to know where your firm stands? My free AI Compliance Check shows you in 2 minutes where the gaps are.
Or book a free 30-minute call directly. We’ll look at your current setup together and figure out what you need.