What 2026 Means for AI Compliance in Germany: Deadlines, Obligations, Risks

EU AI Act enforcement, Copilot forced migration, GDPR audit focus on AI. The key deadlines for 2026 at a glance.

2026 is the year AI regulation gets serious in Germany. Not because a single law takes effect, but because several developments collide at once: regulation, enforcement, and market changes. And most companies are not prepared.

I am not writing this to spread panic. I am writing it because I see that many business owners do not have these deadlines on their radar. Or they know individual dates but miss the full picture.

Here is the timeline.

February 2, 2025: AI Literacy Obligation Already Applies

This date is in the past, but it belongs here because many companies missed it. Since February 2, 2025, all companies that use AI must ensure their employees have sufficient AI competency. That is Article 4 of the AI Act (Regulation (EU) 2024/1689).

I broke this down in a dedicated post. The short version: there is no prescribed training format. But the obligation is binding, regardless of company size. And enforcement is coming in August 2026.

What this means in practice: if your employees use AI tools, whether officially introduced or as shadow AI, you must be able to demonstrate that those employees know what they are doing. Which tools they use, how those tools process data, where the limits are, which internal rules apply.

April 15, 2026: Microsoft Removes Free Copilot Chat

On April 15, 2026, Microsoft removed the free Copilot Chat feature from Office applications. No more free Copilot in Word, Excel, PowerPoint, and Outlook for business users. Anyone who wants to continue using the AI features needs a paid Copilot license.

Why is this relevant for compliance? Because this change triggers a wave of Copilot licensing. Companies that have been experimenting with the free version face a decision: buy licenses or give it up. And many will buy.

The problem: Copilot with a license has access to your entire Microsoft 365 environment. SharePoint, OneDrive, Teams, Exchange, everything the user has access to. If permissions in SharePoint are not cleanly configured, Copilot surfaces documents the user should not be seeing. I described the oversharing problem with Copilot and SharePoint in detail.

Many companies will roll out Copilot licenses in the coming weeks. Without a DPIA. Without a permissions review. Without sensitivity labels. That is the perfect storm for data protection incidents.
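A permissions review does not need special tooling to get started. Here is a minimal sketch that assumes you have exported a SharePoint permissions report to CSV; the column names (`site`, `item`, `principal`) and the list of tenant-wide groups are illustrative assumptions, not Microsoft's actual export schema. It flags items shared with groups that effectively mean "everyone", which is exactly what Copilot would inherit:

```python
import csv
import io

# Groups that effectively grant tenant-wide visibility.
# Hypothetical list -- adjust to the group names in your own directory.
BROAD_PRINCIPALS = {"Everyone", "Everyone except external users", "All Users"}

def flag_overshared(report_csv: str) -> list[dict]:
    """Return rows of a permissions export whose principal is tenant-wide.

    Expects columns: site, item, principal (illustrative schema).
    """
    reader = csv.DictReader(io.StringIO(report_csv))
    return [row for row in reader if row["principal"] in BROAD_PRINCIPALS]

# Example export with illustrative data:
report = """site,item,principal
HR,salaries_2026.xlsx,Everyone
HR,onboarding.docx,HR Team
Legal,merger_draft.docx,Everyone except external users
"""

for row in flag_overshared(report):
    print(f"{row['site']}/{row['item']} is visible tenant-wide via '{row['principal']}'")
```

Even a rough pass like this, run before licenses go live, surfaces the documents Copilot would otherwise surface for you.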

August 2, 2026: EU AI Act Enforcement for High-Risk AI

This is the date everyone should remember. Starting August 2, 2026, national enforcement of the EU AI Act begins for high-risk AI systems. By then, member states must have designated supervisory authorities and introduced sanctions rules.

Article 99 of the AI Act allows fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher, for the most serious violations. That is a higher ceiling than the GDPR's maximum of EUR 20 million or 4%.

What qualifies as high-risk AI is defined in the law. For law firms and financial advisors, certain use cases are relevant: AI systems used in the administration of justice, AI for creditworthiness assessments, AI in personnel decisions. If you use Copilot or other AI tools for such purposes, additional obligations may apply: a risk management system, data quality requirements, technical documentation, transparency obligations toward users.

The literacy obligation from Article 4 will also be enforced from this date. Supervisory authorities can audit and sanction.

Throughout 2026: GDPR Supervision Focuses on AI

Data protection supervisory authorities have named AI as an audit priority. The DSK (Germany’s data protection conference), the BfDI (Federal Commissioner for Data Protection), and several state commissioners have announced this. Specifically, that means: increased audits on whether companies have conducted DPIAs for AI tools, whether data processing agreements exist, whether records of processing activities are current.

This affects companies of all sizes. Supervisory authorities do not only audit large corporations. Complaints from data subjects, data breach notifications, or routine checks can land at a firm with five employees just as easily.

The GDPR requirements for AI tools like Copilot are nothing fundamentally new. DPIA, DPA, legal basis, records of processing. But with AI tools, these requirements are scrutinized more closely because the risks are higher than with conventional data processing.

NIS2 and AI: The Overlooked Connection

The NIS2 Directive (Network and Information Security Directive) is primarily a cybersecurity law. But it has implications for AI compliance. NIS2 requires certain companies and organizations to manage risks in their supply chain. AI tools are part of the supply chain.

If your company falls within the scope of NIS2 (and the scope is broader than many think), you must assess the risks of your AI providers. Where is data processed? Which subprocessors are involved? What does the provider’s security concept look like?

For most law firms and tax advisors, NIS2 does not apply directly. But if you advise or serve clients who fall under NIS2, those clients will ask you questions about your own IT security. Including AI usage.

The High-Risk Question With Copilot

This is a point that is not discussed much yet. Microsoft Copilot is a tool. Whether it qualifies as high-risk AI depends not on the tool itself but on the use case.

If a firm uses Copilot to summarize emails or generate calendar suggestions, that is not a high-risk application. If the same firm uses Copilot to support legal research, analyze contracts, or create employee evaluations, we may be in high-risk territory.

The distinction is not always clear. And that is precisely the risk: many companies do not know whether their AI usage falls under high-risk. Without that assessment, they cannot fulfill the obligations that will be enforced from August 2026.
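One practical way to make that assessment concrete is a use-case inventory that maps each AI workflow to a tentative risk tier. A minimal sketch follows; the keywords and category notes are my own illustrative assumptions, loosely modeled on the Annex III areas named above, and are no substitute for a formal legal assessment:

```python
# Illustrative triage -- keywords and tiers are assumptions, not legal advice.
HIGH_RISK_HINTS = {
    "creditworthiness": "Annex III: access to essential private services (credit scoring)",
    "employee evaluation": "Annex III: employment / worker management",
    "justice": "Annex III: administration of justice",
}

def triage(use_case: str) -> str:
    """Return a tentative risk note for a described AI use case."""
    text = use_case.lower()
    for keyword, note in HIGH_RISK_HINTS.items():
        if keyword in text:
            return f"POTENTIALLY HIGH-RISK -- {note}; needs formal assessment"
    return "likely minimal risk -- document the reasoning anyway"

print(triage("Copilot summarizes inbound emails"))
print(triage("Copilot drafts quarterly employee evaluations"))
```

The point is not the code but the discipline: every use case gets a written classification, and the "likely minimal risk" cases are documented too, so you can show the assessment happened.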

What You Should Have in Place by August 2026

Here is the checklist. Not as a theoretical wish list, but as the minimum that must withstand an audit.

A DPIA for every AI tool that processes personal data in your company. That includes Copilot, but also ChatGPT Team, Claude, or whatever else is in use.

An AI usage policy that has been documented and distributed to all employees. With dates, signatures, verifiable.

Proof that the AI literacy obligation has been met. Training conducted, participation documented, content documented.

Sensitivity labels and DLP policies that are configured and control which data AI tools may access.

A current record of processing activities that includes AI processing.

A current subprocessor register. If you use Copilot, Microsoft processes data. If Microsoft uses Anthropic as a subprocessor (which is the case for certain features), that belongs in the register too.
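A subprocessor register does not have to live in a Word file. Here is a minimal sketch of a machine-readable version with a staleness check; the field names, review interval, and example entries are illustrative assumptions, not a legal template:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative schema -- field names are assumptions, not a legal template.
@dataclass
class SubprocessorEntry:
    tool: str            # AI tool in use, e.g. "Microsoft 365 Copilot"
    provider: str        # direct processor under your DPA
    subprocessor: str    # downstream processor named by the provider
    purpose: str         # which feature or data the subprocessor touches
    last_reviewed: date  # when you last verified the provider's published list

def stale_entries(register: list[SubprocessorEntry],
                  today: date, max_age_days: int = 365) -> list[SubprocessorEntry]:
    """Entries whose subprocessor list has not been re-checked within max_age_days."""
    return [e for e in register if (today - e.last_reviewed).days > max_age_days]

register = [
    SubprocessorEntry("Microsoft 365 Copilot", "Microsoft", "Anthropic",
                      "model inference for certain Copilot features", date(2025, 3, 1)),
    SubprocessorEntry("ChatGPT Team", "OpenAI", "Microsoft Azure",
                      "hosting and inference", date(2026, 2, 10)),
]

for e in stale_entries(register, today=date(2026, 4, 20)):
    print(f"Re-verify {e.provider} subprocessors for {e.tool} (last reviewed {e.last_reviewed})")
```

Providers change their subprocessor lists; a register that is never re-reviewed is a register that is wrong.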

Less Than Four Months

Between today and August 2, 2026, less than four months remain. That is not much time, but it is enough: most of the items above can be implemented in days, not months, provided you start now.

If you are not sure where your company stands: I offer a free 30-minute assessment. We go through your current situation and identify the biggest gaps.

Book a consultation: 30-minute AI compliance assessment


Jose Lugo is a CISSP-certified AI compliance consultant based in Germany. He helps tax advisors, law firms, and financial services firms deploy AI tools in compliance with GDPR. Learn more at joselugo.de/en.