AI Literacy Obligation: What the EU AI Act Requires From Your Company

Since February 2, 2025, every company using AI must ensure employees have sufficient AI competency. Article 4 of the EU AI Act applies regardless of company size. Here's what it means in practice.

Most German SMBs don’t know this obligation exists.

Since February 2, 2025, every company using AI has been required to ensure that its employees have a sufficient level of AI competency. This isn’t a recommendation. It’s Article 4 of the EU AI Act (Regulation (EU) 2024/1689). And it’s already in effect.

I barely see this being discussed. When people talk about the EU AI Act, the conversation usually revolves around high-risk AI systems, prohibited practices, and obligations for the big tech providers. Article 4 gets almost no attention. Which is a problem, because it’s the provision that actually affects the most businesses right now.

What Article 4 Actually Says

The text is short: providers and deployers of AI systems must take measures to ensure, to the best of their ability, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf.

The regulation keeps this deliberately open. No prescribed training program. No certification you need to show. Instead, the competency must be proportionate to the technical knowledge, experience, education, and training of the people involved, and to the context the AI systems are used in.

Sounds vague? It is. But “vague” does not mean “non-binding.” The obligation is in the legal text. And regulators will interpret it once they start enforcing.

Who Does This Apply To?

Everyone. Every company using AI tools.

The law doesn’t differentiate by company size. There’s no exemption for small firms, no threshold based on employee count. The key term is “deployer.” If your employees use AI tools in their work, you’re a deployer under this regulation.

In practice: a five-person tax advisory firm using ChatGPT for research is just as covered as a 50-person law firm rolling out Microsoft Copilot. The financial advisor with AI-powered analysis tools? Also covered.

If you’re wondering whether this applies to you, ask your employees what tools they use day to day. The answer will probably surprise you. ChatGPT, Copilot, DeepL, AI-assisted translations in Outlook. AI is already there, even if nobody officially decided to adopt it.

What “Sufficient AI Literacy” Means in Practice

The regulation doesn’t prescribe specific content. But from the context and the recitals, the intent is clear. Employees need to understand:

  • Which AI tools they’re using and what those tools actually do
  • How those tools process data (locally, cloud, EU, outside the EU)
  • Where the limitations and risks are (hallucinations, bias, unreliable outputs)
  • What internal rules apply to AI usage

Here’s the part that matters: the competency must match the role. A partner deciding whether to adopt AI tools needs a different level of understanding than an assistant using Copilot for email summaries. The partner needs to assess strategic and legal implications. The assistant needs to know what data they can input, what they can’t, and when not to blindly trust the output.

This isn’t a theoretical distinction. When a regulator asks how you ensure AI literacy, they want to see that you’ve thought about it. A generic training video for everyone won’t cut it. Your approach needs to match how AI is actually used in your organization.

The Timeline

The obligation has been in effect since February 2, 2025. That’s not a typo. The literacy requirement from Article 4 is one of the provisions that became effective immediately, not on the later deadlines for high-risk systems.

National enforcement begins August 2, 2026, when the regulation becomes generally applicable and member states' supervisory authorities and penalty frameworks must be fully operational. That's when audits start.

Less than four months from today. No grace period has been announced.

I’m not saying this to create pressure. I’m saying it because many companies have the August 2025 date in mind (general-purpose AI obligations) or August 2027 (the remaining high-risk obligations) and are overlooking the fact that the literacy obligation is already active and enforcement is right around the corner.

What Documentation You Should Have

The law doesn’t specify a particular documentation format. The concrete expectations will become clearer through regulatory guidance and national implementation.

Still, documentation protects you. When a supervisory authority asks “How do you ensure your employees’ AI literacy?”, they want to see a system. It doesn’t need to be perfect. It needs to show you’ve actually thought about this.

What I recommend:

First: training records. Who was trained, when, on what? A simple spreadsheet works. Name, date, training content, signature. Not a big lift, but it’s the difference between “we did something” and “here’s the proof.”
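If you'd rather keep that log in a file than a spreadsheet, here's a minimal sketch. The file name, field names, and helper function are my own illustrations; the AI Act prescribes no format.

```python
import csv
import os
from datetime import date

# Illustrative field layout for an AI-literacy training log:
# one row per employee per training session.
FIELDS = ["name", "date", "training_content", "confirmed"]

def log_training(path, name, content, confirmed=True):
    """Append one training record; write the header row if the file is new."""
    is_new = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "name": name,
            "date": date.today().isoformat(),  # when the session took place
            "training_content": content,       # what was covered
            "confirmed": confirmed,            # attendance confirmed (stand-in for a signature)
        })
```

Calling `log_training("ai_training_log.csv", "A. Example", "Intro: tools, data rules, limits")` appends one row. That's the whole system: name, date, content, confirmation.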

Second: an approved tool register. Which AI tools are authorized in your company? Why were they approved? What license tier? Where is data processed? Put it in one document so you don’t have to improvise during an audit.
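The register itself is just a structured table. As a sketch of the fields it should carry (field names and the example entries are illustrative assumptions, not anything the regulation mandates — verify data locations against your own contracts):

```python
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    """One entry in the approved-AI-tool register. Fields are illustrative."""
    name: str             # tool and vendor
    approval_reason: str  # why it was approved
    license_tier: str     # which plan/tier is in use
    data_location: str    # where data is processed, e.g. "EU cloud", "local"

# Example entries, illustrative only.
REGISTER = [
    ApprovedTool("DeepL", "translation; reviewed vendor terms", "Pro", "EU cloud"),
    ApprovedTool("Microsoft Copilot", "covered by existing M365 agreement", "M365", "EU cloud"),
]
```

Whether you keep this as a dataclass, a spreadsheet, or a one-page table doesn't matter. What matters is that all four answers exist in one place before an auditor asks for them.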

Third: an internal usage policy. What can employees do with AI tools, what can’t they do? What data goes in, what stays out? Who do they ask when something is unclear? I’ve written about this in detail in Why Generic AI Tools Fail Regulated Firms.

These three things together show a regulator: you’ve engaged with the topic. That’s not a guarantee against penalties, but it’s the foundation of any defense.

How to Get Started

If you haven’t done anything in this direction yet, start small. You don’t need to solve everything at once.

Start with an inventory. Find out which AI tools are actually in use at your company. Ask your employees directly. Check installed software and browser extensions. The results are often surprising.

Then write a usage policy. One page. Which tools are allowed, what data may be entered, what’s off limits. No 30-page handbook, no legal jargon.

Then run a training session. 60 minutes with the whole team. What is AI? Which tools do we use? What rules apply? Document attendance, and you’re done.

This sounds simple because it is simple. The Article 4 obligation doesn’t require a massive budget. It requires that you engage with the topic and can prove it.

I include AI literacy documentation in every compliance engagement I do. Training, policy, records. Without that, the rest doesn’t hold up. You can have the best technical configuration in the world. If the people using it don’t know what they’re doing, it doesn’t matter.

This Isn’t About Panic

The AI literacy obligation isn’t a scare tactic. It’s common sense: if you deploy AI tools, your people should understand what those tools do. The EU just put it in writing.

Whether you agree with the regulation or not doesn’t change the fact that it’s law. And starting August 2026, someone will ask if you’re ready.


Not sure where your company stands? The AI Compliance Check takes 2 minutes and shows where action is needed.

Prefer to talk directly? Book a free 30-minute call. No sales pitch, just an honest assessment of where you are.


Jose Lugo is a CISSP-certified AI compliance consultant based in Germany. He helps tax advisors, law firms, and financial advisors deploy AI tools in compliance with the GDPR, including employee training and AI literacy documentation under the EU AI Act.