Getting Started With AI: 5 Steps Before You Choose a Tool

Before you roll out Copilot or ChatGPT: these five steps set your company up for AI usage that is GDPR-compliant from the start.

You know AI is coming. Or rather: it is already here. Maybe you have decided to officially introduce AI. Maybe a client asked whether you use AI. Maybe you simply noticed that your employees have been using ChatGPT for months.

In every case, the same question comes up: Where do I start?

Most companies start wrong. They buy licenses. Copilot, ChatGPT Team, whatever the IT provider recommends. Tool rolled out, done. Three months later they discover that permissions are a mess, no policy exists, and the GDPR side is wide open.

I see this constantly. That is why I wrote down the five steps that come before the tool choice. In this order.

Step 1: Find Out What Your Employees Are Already Using

Before you think about a new tool, you need to know what is already in use. And I do not mean the officially approved software.

Ask your employees directly. Without blame, without consequences. Simply: Which AI tools do you use in your daily work? Private ChatGPT accounts? Perplexity? DeepSeek? Browser extensions that rewrite text?

The answers will surprise you. In practically every company I look at, I find tools that management did not know about. That is not a criticism of the employees. The tools are useful, and there is no official alternative. So they take the unofficial one.

Alternatively or additionally: check your firewall or DNS query logs. Domains like chat.openai.com, gemini.google.com, and perplexity.ai show up there when those tools are in use. That gives you a picture without having to ask.
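
If you have the logs as an export, the check is simple enough to script. Here is a minimal sketch that scans log lines for known AI domains; the log format and the domain list are assumptions you would adapt to your own firewall or DNS setup.

```python
# Minimal sketch: scan exported DNS/firewall log lines for known AI domains.
# The domain list and log format are assumptions - adjust both to your setup.

AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "perplexity.ai",
    "chat.deepseek.com",
}

def find_ai_usage(log_lines):
    """Return the set of AI domains that appear anywhere in the log lines."""
    hits = set()
    for line in log_lines:
        for domain in AI_DOMAINS:
            if domain in line:
                hits.add(domain)
    return hits

# Illustrative log lines, not real data:
sample = [
    "2024-05-03 09:12:01 client=10.0.0.17 query=chat.openai.com",
    "2024-05-03 09:13:44 client=10.0.0.23 query=intranet.example.local",
    "2024-05-03 09:15:02 client=10.0.0.17 query=perplexity.ai",
]
print(sorted(find_ai_usage(sample)))
```

This only tells you which services were reached, not by whom or with what data. For the baseline, that is enough.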

The result of this step is your shadow AI baseline. You now know where you stand. Not where you think you stand. I covered the topic of shadow AI in a dedicated post.

Step 2: Create a Usage Policy

Before any tool is officially introduced, you need ground rules. An AI usage policy answers three questions:

Which tools are approved? Not “AI is okay,” but specifically. ChatGPT Team with a business license. Microsoft Copilot with Enterprise Data Protection. Or whatever you plan to use. Private accounts for business data are not permitted.

Which data may be entered? This also needs to be specific. General research questions and text drafts without personal data: generally not a problem. Client data, financial data, contract contents: only in approved tools with a data processing agreement. Some data has no business being in any AI tool.
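
To make the data rule concrete, here is a toy sketch of what a pre-submission check might look like: flag obvious personal data before a prompt leaves the company. The patterns are illustrative assumptions, not a complete PII detector; in practice this is what DLP tooling does.

```python
import re

# Toy sketch of a pre-submission check: flag obvious personal data before a
# prompt reaches an AI tool. The patterns are illustrative assumptions, not
# a complete PII detector - real deployments rely on DLP tooling for this.

CHECKS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "phone number": re.compile(r"\+\d{2}[\d /-]{7,}"),
}

def flag_personal_data(prompt):
    """Return the names of the checks that matched the prompt text."""
    return [name for name, pattern in CHECKS.items() if pattern.search(prompt)]

print(flag_personal_data("Summarize the contract for max.mustermann@example.com"))
# -> ['email address']
```

A general research question passes cleanly; a prompt containing an email address or IBAN gets flagged. The point of the sketch is the shape of the rule, not the pattern quality.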

What happens when someone violates the policy? Clearly stated, not as a threat. Employees need to know there is a process. And that there is a way to report when data was accidentally entered into the wrong tool. Mistakes happen. What matters is that they get reported.

A good policy fits on two to three pages. It gets introduced in a team meeting, not sent as a PDF by email and forgotten. I wrote a detailed post on AI usage policies if you want to go deeper.

Step 3: Choose a Tool Based on Data Protection Criteria

Now, after steps 1 and 2, you may talk about tools.

The selection is not based on features. Not on marketing demos. Not on what a colleague saw at a conference. The selection is based on three questions:

Is there a data processing agreement (DPA)? Without a DPA, using the tool for personal data is not GDPR-compliant. Period.

Where is the data processed? EU data centers are the standard you need. US processing is possible if appropriate safeguards exist (standard contractual clauses, Data Privacy Framework). But it complicates everything.

Can a DPIA be conducted? You need to be able to assess the processing risks. That requires transparency from the provider: What happens to the data? Is it used for training? How long is it stored? Who has access?

For companies already using Microsoft 365, Copilot is often the obvious choice. Not because Microsoft has the best AI, but because the infrastructure is already there: DPA, EU data centers, integration with existing permission structures. But Copilot must be configured. Without configuration, Copilot inherits every permission problem that exists in your SharePoint.

I compared the common AI tools regarding their GDPR suitability. That can help with the selection.

Step 4: Run a Compliance Assessment

You have now chosen a tool. Before it gets rolled out to employees, the assessment comes. A DPIA under Article 35 GDPR. A permissions review, if the tool accesses existing company resources (with Copilot, that means SharePoint, OneDrive, Teams, Exchange). Sensitivity labels and DLP policies that control which data the tool may access.

This is the step where I come in. Not because steps 1 through 3 are trivial, but because step 4 is where technical and regulatory requirements converge. A DPIA for an AI tool is not a form to fill out. You need to understand how the tool processes data, where the risks lie, and which measures reduce those risks to an acceptable level.

For Copilot specifically, that means: review SharePoint and OneDrive permissions. Create or adjust sensitivity labels. Configure DLP policies. Ensure the audit log is active and usage remains traceable. Test whether the configuration does what it should.

That sounds like a lot. In practice, it takes days. If the Microsoft 365 infrastructure is cleanly set up, it goes faster. If permissions are a mess (which is common), it takes a bit longer. But it remains manageable.

Step 5: Start With a Pilot Group, Then Expand

Do not roll the tool out to everyone at once. Start with a small group. Five people, maybe ten. Employees who are motivated and willing to give feedback.

The pilot phase is not just about testing the technology. It is about testing the policy. Are the rules understandable? Are there situations the policy does not cover? Are employees using the tool the way you envisioned? Or are they working around restrictions because they are impractical?

Collect feedback after two to four weeks. Adjust labels, DLP policies, and the policy if needed. Then expand gradually.

This approach has another advantage: the pilot group becomes your internal AI experts. The colleagues who used it first and can show others how it works. That is more effective than any external training session.

What You Should Not Do

Do not skip steps 1 and 2 and go straight to step 3. Most companies do exactly that. Buy the tool, distribute licenses, think about compliance someday. That reliably leads to problems.

Do not buy licenses before the compliance assessment is complete. Licenses cost money. If the assessment reveals that your SharePoint permissions need cleanup first, the licenses sit unused for months.

And do not assume your IT provider covers the GDPR side. IT providers configure software. GDPR compliance is a different discipline. Most IT providers set up Copilot technically and leave the data protection side to the client. That is not a criticism of IT providers. It is simply not their core business.

How I Work

My engagements follow exactly these five steps. Inventory of current AI usage. Create or review the usage policy. Tool evaluation based on data protection criteria. Compliance assessment with DPIA, permissions review, labeling, DLP. Pilot phase support.

That is a structured process with a clear beginning and end. Not an ongoing consulting contract that drags on for months.

If you do not know where to start: start with step 1. And if you need support with that, get in touch.

Book a consultation: 30-minute AI compliance assessment

Or start with the AI Compliance Check. Two minutes, and you know where action is needed.


Jose Lugo is a CISSP-certified AI compliance consultant based in Germany. He helps tax advisors, law firms, and financial services firms deploy AI tools in compliance with GDPR. Learn more at joselugo.de/en and Services.