Banning AI Doesn’t Work. Here’s What Does.
Some firms decide the safest move is to ban AI entirely. No ChatGPT, no Gemini, no Copilot. On paper, it makes sense. If nobody uses AI, nobody can put data into the wrong tool.
In practice, it fails. Not in a few months. Immediately.
What the Numbers Say
CybSafe and the National Cybersecurity Alliance found in a joint study that 46% of employees would refuse to stop using AI even if their employer banned it. That is not a small minority. That is nearly half.
Software AG makes it even clearer: 57% of surveyed employees admit to hiding their AI use from their employer.
A ban does not stop usage. It stops visibility.
What that means: employees keep using AI, but you no longer know which tools, which data, which risks. Control does not disappear because people use AI. It disappears because you have no idea they are doing it.
Samsung, Apple, JPMorgan: They All Tried
In May 2023, Samsung banned ChatGPT company-wide. The trigger was an incident in which employees had pasted confidential source code into ChatGPT. The reaction was understandable. The result was not what Samsung wanted.
A few months after the ban, Samsung built its own internal AI solution. The reason: productivity losses were too significant. Employees had gotten used to working at AI speed. Without it, efficiency dropped noticeably.
Same pattern at Apple. First a ChatGPT ban, then Apple Intelligence. JPMorgan Chase banned ChatGPT, then launched LLM Suite for 200,000 employees. Deutsche Bank, Amazon, Verizon: the list goes on. The sequence is the same everywhere. Ban, realize it does not work, build a controlled alternative.
None of these companies lifted their ban out of a change of principle. They lifted it because it had failed in practice.
Why Employees Use AI Anyway
It helps to see this from the employee’s perspective. Picture a tax associate who needs to draft a client letter. Without AI, that takes 20 to 30 minutes. With an AI tool, two minutes for a solid draft that she then refines.
This person is not going to stop using AI because an email from management says so. She will use her personal phone. Or her private laptop from home. Or install a browser extension that nobody knows about.
AI tools are not like social media at work. They are productivity tools. People do not use them out of boredom. They use them because they produce better work in less time. And once someone has experienced that, they do not voluntarily go back.
The Real Problem: The Underground
When AI usage is officially banned, it still happens. But it happens in the dark. And that is worse than sanctioned usage. Significantly worse.
Because with hidden AI usage, there is no logging. No overview of what data goes where. No Data Processing Agreement. No legal basis under GDPR. And if something goes wrong, there is no incident response plan, because officially nobody is using AI.
Shadow AI is not a theoretical problem. It happens in companies right now, every day. The only question is whether you can see it or not.
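You do not need expensive tooling to get a first answer. A crude scan of your own proxy or DNS logs usually shows whether a ban is being followed. The sketch below is a minimal example, not a monitoring product: the log format (one requested hostname per line) and the list of AI endpoints are assumptions you would adapt to your own environment.

```python
# shadow_ai_scan.py -- minimal sketch: count requests to known AI endpoints
# in a proxy/DNS log. Log format and domain list are assumptions; adapt
# both before drawing any conclusions.

from collections import Counter

# Illustrative, deliberately incomplete list of consumer AI endpoints.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def scan_log(path: str) -> Counter:
    """Count hits per AI domain in a log with one hostname per line."""
    hits: Counter = Counter()
    with open(path, encoding="utf-8") as log:
        for line in log:
            host = line.strip().lower()
            # Match the domain itself or any subdomain of it.
            for domain in AI_DOMAINS:
                if host == domain or host.endswith("." + domain):
                    hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in scan_log("proxy_hosts.log").most_common():
        print(f"{domain}: {count} requests")
```

If the count is not zero, the ban is not working. It is just invisible.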
For regulated professionals in Germany, there is an additional dimension. If client data ends up in an unsecured AI tool, that is not just a compliance issue. Section 203 of the German Criminal Code carries up to one year of imprisonment or a fine. “We didn’t know our employees were using it” is not a defense. And responsibility stays with the firm’s leadership.
What Works Instead
The alternative to a ban is not a free-for-all. It is a controlled environment. Employees get AI tools they are allowed to use. Those tools are configured to meet GDPR requirements. And there are clear rules about what is permitted and what is not.
This sounds like more effort than a ban, and during the initial setup it is. But unlike a ban, it works.
It starts with a usage policy. Clear rules on which tools are approved and what data can go in. Not 50 pages, but two or three. Simple enough that everyone on the team can read and understand it in 15 minutes. I described what such a policy looks like here.
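To make that concrete: the core of a usable policy is small enough to express as a simple allowlist. Here is one possible encoding as a sketch; the tool names and data categories are purely illustrative, and a real policy needs review by someone who knows your professional obligations.

```python
# ai_usage_policy.py -- sketch of a two-page policy reduced to its core:
# which tools are approved, and which data categories may go into them.
# Tool names and categories are illustrative, not a recommendation.

APPROVED_TOOLS = {
    # tool: data categories that may be entered
    "copilot-m365": {"public", "internal", "client-anonymized"},
    "internal-gpt": {"public", "internal"},
}

BANNED_CATEGORIES = {"client-identifiable", "health", "criminal-records"}

def is_allowed(tool: str, category: str) -> bool:
    """Check a single (tool, data category) combination against the policy."""
    if category in BANNED_CATEGORIES:
        return False  # never allowed, in any tool
    return category in APPROVED_TOOLS.get(tool, set())

# Examples:
assert is_allowed("copilot-m365", "client-anonymized")
assert not is_allowed("copilot-m365", "client-identifiable")
assert not is_allowed("chatgpt-free", "public")  # unapproved tool
```

The point is the shape: a short list of approved tools, a short list of data categories, and an unambiguous answer for every combination.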
Then the technical side. Microsoft Copilot with Enterprise licensing processes data within the EU, within your tenant. But that alone is not enough. Sensitivity Labels need to be configured so Copilot knows which documents it can access. DLP policies need to prevent sensitive data from flowing into unsecured channels. SharePoint permissions need to be correct.
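The SharePoint point deserves special attention, because this is where most Copilot rollouts stumble: Copilot surfaces whatever the signed-in user technically has access to, including long-forgotten org-wide shares. The sketch below is a minimal oversharing check for a single document library via Microsoft Graph; token acquisition and pagination are omitted, and a real audit would cover every library, not just top-level items.

```python
# oversharing_check.py -- sketch: flag items in one SharePoint document
# library that are shared via "anyone" or org-wide links, using Microsoft
# Graph. Assumes an access token with Files.Read.All is already available;
# acquiring it (e.g. via MSAL) is out of scope here.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def flag_broad_links(token: str, drive_id: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}
    # Top-level items only; pagination omitted for brevity.
    items = requests.get(
        f"{GRAPH}/drives/{drive_id}/root/children", headers=headers
    ).json().get("value", [])

    for item in items:
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
            headers=headers,
        ).json().get("value", [])
        for perm in perms:
            scope = perm.get("link", {}).get("scope")
            # "anonymous" = anyone with the link; "organization" = everyone
            # in the tenant. Both are common ways content ends up visible
            # far more broadly than intended.
            if scope in ("anonymous", "organization"):
                print(f"{item['name']}: shared with scope '{scope}'")

# flag_broad_links(token="<acquired via MSAL>", drive_id="<drive id>")
```

If this turns up surprises, fix the permissions before rolling out Copilot, not after.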
And training. Employees need to know what they can and cannot do. More importantly, they need to understand why these rules exist. The AI Literacy obligation from the EU AI Act (Art. 4) requires this anyway. But even without the legal mandate: a policy nobody knows about is worthless.
What Employees Get Out of It
This is the part many firms forget. Good AI governance is not purely a compliance project. It is also a productivity promise.
When employees get approved tools that work well, they use them. Voluntarily. The tax associate who drafts her client letter in two minutes is happy to do it in the approved tool. The financial advisor does not need a personal ChatGPT account if Copilot can do the same thing.
Most people do not want to circumvent bans. They want to do good work. Give them usable tools, and Shadow AI disappears on its own.
What This Looks Like in Practice
I deploy exactly this combination for law firms, tax advisory firms, and consulting companies. Usage policy, technical configuration, training. None of these pieces work well alone. Together, they give you an AI environment that employees actually want to use and that is also compliant.
This takes days, not months. And it replaces a ban with something that actually works.
If you want to know where your firm stands and whether your current strategy (or your ban) is holding up: I offer a free 30-minute assessment.
Book a call: 30-minute AI compliance assessment
Jose Lugo is a CISSP-certified consultant based in Germany, specializing in GDPR-compliant AI deployments for mid-size firms. More at joselugo.de and services.