Your Employees Are Using AI. You Just Don’t Know It Yet.
I mean this literally. Right now, people in your organization are using ChatGPT, Gemini, or other AI tools. Without approval, without a policy, without you knowing about it.
How do I know? Because the numbers are hard to argue with. And because I see the same picture every time I look at a company’s IT infrastructure.
The Numbers
Software AG surveyed 6,000 workers in Germany. Result: 50% use non-approved AI tools in their daily work. Bitkom found something similar in a separate study: 34% of employees use personal accounts for business AI tasks.
Here is the part that should get your attention: 57% say they hide their AI usage from their employer.
These are not edge cases. This is a pattern. Your employees discovered a tool that makes their work easier. And because there is no official solution, they take the unofficial one.
What Goes In
Think about what employees type into these tools every day. Client emails they want to “make more professional.” Contracts they need summarized. Financial data they want formatted. Customer names alongside their specific requests.
This is not hypothetical. It happens exactly like this. An assistant at a tax advisory firm pastes a client letter into the free version of ChatGPT to polish the wording. A financial advisor enters client data into an AI tool to prepare a presentation. A junior lawyer summarizes a draft contract because the deadline is tight.
Most employees do not think twice about it. For them, ChatGPT is a better Google. They see an input field, type something, get an answer. The fact that their input ends up on a server in Virginia and may become part of training data does not cross their minds.
With the free version of ChatGPT, inputs are used by default to train the model. That is in the terms of service, but who reads those? The data goes to servers in the US, without a Data Processing Agreement, without a legal basis under GDPR. If you want to understand what exactly that means: I broke down the different ChatGPT license tiers and their GDPR implications here.
And the problem goes beyond ChatGPT. Google Gemini, Perplexity, Claude’s free version, DeepL Write, assorted “AI-powered” browser extensions that rewrite your text. The market is full of tools that process data without a DPA in place. Many employees do not even realize that the browser extension they use for grammar checking sends the entire text to an external server.
For regulated firms in Germany, there is an additional layer. Section 203 of the German Criminal Code (StGB) protects professional confidentiality for lawyers, tax advisors, and auditors. Client data entered into a US-based AI tool without contractual safeguards is not just a GDPR issue. It is a criminal offense, punishable by up to one year of imprisonment or a fine. And professional secrecy obligations apply regardless of whether the client knows about the disclosure.
Why Bans Do Not Work
The obvious reaction: ban AI. No ChatGPT, no Gemini, no Claude at work. Problem solved.
Except it does not work.
Bitkom asked about this too: 46% of respondents said they would continue using AI tools even if their employer banned them. Nearly half. And those are the ones willing to admit it.
Samsung tried. After employees pasted confidential source code into ChatGPT, a company-wide ban followed in May 2023. A few months later came the reversal: Samsung built an internal AI solution because the productivity losses were too significant. Apple had a similar ban, then came Apple Intelligence. JPMorgan Chase banned ChatGPT, then developed LLM Suite for 200,000 employees. Deutsche Bank, Amazon, Verizon: the list goes on. They all realized that banning AI does not solve the problem. It pushes it underground.
The reason is straightforward. AI tools are genuinely useful. Employees use them because they work faster and get better results. A letter that used to take 45 minutes is done in 10. Research that ate up half a day delivers usable results in minutes. Banning that penalizes productivity. And employees find workarounds. Personal phone, private browser, mobile hotspot instead of the company network. Shadow AI does not disappear with a ban. It just becomes invisible.
What Works Instead: An AI Usage Policy
The alternative to a ban is regulation. Not bureaucracy, but clear rules that tell employees: this is what you can do, and this is how.
An AI usage policy needs to answer three questions.
First: Which tools are approved? Not “AI is fine,” but specific tools: ChatGPT Team, Microsoft Copilot with Enterprise licensing, or whatever you deploy. Name the exact tier, and make clear that personal accounts are off-limits for business data.
Second: What can be entered? This needs to be specific too. Public information and general text drafts are usually fine. Personal data, client data, financial data, contract content: only in approved tools with a proper Data Processing Agreement. And some data has no place in any AI tool, regardless of the license. A sketch of what an automated input check could look like follows below.
Third: What happens if someone breaks the rules? Not framed as a threat, but clear. Employees need to know there is a process. That an accidental slip is handled differently from a repeated violation. And that there is a reporting channel if someone accidentally puts data into the wrong tool.
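To make the second rule tangible, here is a minimal sketch of an automated input check that warns before sensitive data reaches an AI tool. The patterns below are illustrative assumptions, not a complete DLP setup: a real deployment needs much broader detection and would hook into your browser, proxy, or clipboard workflow.

```python
import re

# Illustrative patterns only; a real check needs far broader coverage.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "German IBAN": re.compile(r"\bDE\d{20}\b"),
    "phone number": re.compile(r"(?:\+49|\b0)[\d /-]{7,}"),
}

def check_before_submit(text: str) -> list[str]:
    """Return warnings for data that should not go into an unapproved AI tool."""
    return [
        f"Possible {label} detected: stop before pasting this into an AI tool."
        for label, pattern in PATTERNS.items()
        if pattern.search(text)
    ]

if __name__ == "__main__":
    draft = "Dear Mr. Weber, we received your transfer from DE89370400440532013000."
    for warning in check_before_submit(draft):
        print(warning)
```

Even a crude check like this changes behavior: the warning appears at the moment of pasting, which is exactly when the policy is usually forgotten.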
This does not need to be 50 pages. A solid usage policy fits on two to three pages and is presented in a 15-minute team meeting. I will publish a practical template for this in an upcoming post.
What the Regulator Will Ask
At some point, there will be an audit. Maybe triggered by a complaint, maybe a routine check, maybe a data breach you have to report. And the supervisory authority will ask questions.
The first question will be: Does your organization use AI tools? If so, which ones? On what legal basis? Is there a Data Processing Agreement?
The second question: Is that documented in your Record of Processing Activities under GDPR Article 30? For most organizations, it is not. Because the record has not been updated since 2018. Or because nobody knows employees are using AI.
The third question: Is there a Data Protection Impact Assessment (DPIA)? Article 35 GDPR requires a DPIA when processing is likely to result in a high risk. Personal data in a US-based Large Language Model? That qualifies.
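For orientation, here is what a minimal Record of Processing Activities entry for an approved AI tool can look like. The field names follow Article 30(1) GDPR; all concrete values are invented examples, not a template for your organization.

```python
# A minimal Record of Processing Activities entry for an approved AI tool.
# Field names follow Article 30(1) GDPR; all values are invented examples.
record_entry = {
    "processing_activity": "AI-assisted drafting of client correspondence",
    "controller_contact": "Example GmbH, dpo@example.com",                # Art. 30(1)(a)
    "purposes": "Drafting, summarizing, and translating business text",   # Art. 30(1)(b)
    "data_subjects_and_data": "Clients: contact data, contract details",  # Art. 30(1)(c)
    "recipients": "Cloud provider as processor, DPA in place",            # Art. 30(1)(d)
    "third_country_transfers": "None; processing restricted to EU regions",  # Art. 30(1)(e)
    "erasure_deadlines": "Prompts and outputs deleted after 30 days",     # Art. 30(1)(f)
    "security_measures": "Tenant isolation, access controls, sensitivity labels",  # Art. 30(1)(g)
}

for field, value in record_entry.items():
    print(f"{field}: {value}")
```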
“We didn’t know” is not an acceptable answer to any of these questions. That is exactly the problem with Shadow AI: it happens without your knowledge, but the responsibility stays with you.
The Real Fix: Approved Tools with Compliance Architecture
A policy alone is not enough. You also need the infrastructure.
That means: AI tools that employees actually want to use (because they are good) and that simultaneously meet GDPR requirements. This is not a contradiction. Microsoft Copilot with the right Enterprise configuration processes data within the EU, within your Microsoft 365 tenant, respecting existing access controls and covered by your Data Processing Agreement.
The difference from Shadow AI: data stays where it belongs. In your infrastructure, under your control, covered by existing contracts.
What this requires:
- The right license. Not every Microsoft 365 license includes Copilot. And not every Copilot license has the same data protection settings. Configuration matters.
- Clean permissions. Copilot searches everything a user has access to. If your SharePoint permissions are a mess, Copilot will show that with brutal clarity. That is not a bug. It is a feature that exposes existing problems.
- Data classification. Sensitivity Labels in Microsoft 365 control which documents Copilot can access. Without labels, there is no control.
- Monitoring. You need visibility into how AI tools are being used. Not to surveil employees, but to verify that your policy is working and no data is leaking into unapproved channels. A minimal example follows below.
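As an illustration of the monitoring point: a few lines of scripting over your proxy or firewall logs already show whether traffic is flowing to AI services outside the approved list. The log format and both domain lists here are assumptions; adapt them to your environment.

```python
import re
from collections import Counter

# Domain lists are illustrative; maintain your own.
APPROVED = {"copilot.microsoft.com"}
KNOWN_AI = {"chat.openai.com", "chatgpt.com", "gemini.google.com",
            "claude.ai", "perplexity.ai"} | APPROVED

# Assumes hostnames appear in plain text in each log line; adjust to your format.
HOST = re.compile(r"\b([\w-]+(?:\.[\w-]+)+)\b")

def unapproved_hits(log_lines):
    """Count requests to known AI services that are not on the approved list."""
    hits = Counter()
    for line in log_lines:
        for host in HOST.findall(line.lower()):
            if host in KNOWN_AI and host not in APPROVED:
                hits[host] += 1
    return hits

if __name__ == "__main__":
    with open("proxy.log") as f:  # hypothetical log file
        for host, count in unapproved_hits(f).most_common():
            print(f"{host}: {count} requests despite policy")
```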
This is a one-time setup, not an ongoing project. If you already run Microsoft 365 E3 or E5, most of the infrastructure is already there. The licenses, the contracts, the admin console. What is missing is the configuration. And that is typically a matter of days, not months.
What to Do Now
Ignoring this is not an option. Banning it does not work. The question is not whether your employees use AI. It is whether they are using it with the right tools and the right rules.
Start with an inventory. Find out which AI tools are in use across your organization: firewall logs, an anonymous survey, honest conversations with your teams. Not as surveillance, but as a realistic assessment. A simple starting point is sketched below.
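For the inventory itself, a first pass can be equally simple: scan a firewall log export for known AI domains and count how many internal sources are talking to each one. The column names and the domain list are assumptions about your export format; a real assessment would combine this with the survey and conversations mentioned above.

```python
import csv
from collections import defaultdict

# Assumed export format: CSV with columns "timestamp", "source", "host".
# Domain list is illustrative; extend it for your environment.
AI_DOMAINS = ("openai.com", "chatgpt.com", "gemini.google.com",
              "claude.ai", "perplexity.ai", "deepl.com")

usage = defaultdict(set)  # AI domain -> internal sources seen contacting it

with open("firewall_export.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        host = row["host"].lower()
        for domain in AI_DOMAINS:
            if host == domain or host.endswith("." + domain):
                usage[domain].add(row["source"])

print("AI tool inventory (distinct internal sources per service):")
for domain, sources in sorted(usage.items(), key=lambda kv: -len(kv[1])):
    print(f"  {domain}: {len(sources)}")
```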
Then: create a usage policy. Clear rules on which tools are allowed and what data can go in. Short, understandable, with concrete examples.
And then: provide an approved solution. Give your employees a tool they want to use that also happens to be compliant. Once that exists, there is no reason for Shadow AI anymore.
If you want to understand how large the Shadow AI problem is in your organization and what steps would make sense: I offer a free 30-minute assessment. No sales pitch, just an honest evaluation of where you stand and what to do next.
Book a call: 30-minute AI compliance assessment
Jose Lugo is a CISSP-certified consultant based in Germany, specializing in GDPR-compliant AI deployments for mid-size firms. More at joselugo.de.