Claude, Anthropic, and the Microsoft Contract: What This Means for Your DPIA

Anthropic became a Microsoft 365 Copilot subprocessor in January 2026. If employees also use Claude directly, you have two exposure paths. What your DPIA needs to cover.

Anthropic, the company behind Claude, has held a dual role since January 2026. They sell their own AI tool. And they process data for Microsoft 365 Copilot as a subprocessor.

That sounds abstract. It is not. If your organization uses Copilot, Anthropic is already in your data flow. If employees also use Claude directly, you have two separate exposure paths. Your DPIA needs to cover both.

Claude as a direct tool

Let me start with the straightforward part. Claude comes in several tiers, and the GDPR conditions vary significantly.

The free version uses your inputs to train the model. That is in the terms of service. No DPA available. Not suitable for business use with personal data.

Claude Pro ($20/month) offers a training opt-out. Better, but still no standard DPA. The contractual basis required by Article 28 GDPR is missing here too.

Starting with Claude Team and the API, things change. Anthropic provides a DPA. Your data is not used for training. That is a contractual commitment, not just a toggle.

EU data residency is only partially addressed though. Anthropic does not operate its own infrastructure in the EU. Processing runs through cloud providers in the US, covered by Standard Contractual Clauses.

I compared Claude with other AI tools in my GDPR comparison. The full breakdown is there.

The Microsoft subprocessor angle

This is where it gets relevant for most organizations, even if they have never used Claude directly.

Since January 2026, Anthropic has been on Microsoft’s official subprocessor list. That means Microsoft routes certain Copilot requests to Anthropic models. Not all requests. But some.

Here is the key point: Anthropic is explicitly excluded from the EU Data Boundary. Microsoft established that boundary to keep customer data within the EU. For subprocessors like Anthropic, it does not apply.

When an employee in your firm opens Copilot in Word and asks it to summarize a document, that request might be routed to an Anthropic model. The data leaves the EU. Not because Microsoft wanted it that way, but because the subprocessor does not offer EU infrastructure.

This is not a hypothetical scenario. This is the current technical architecture.

Two paths, one DPIA

Now it gets complicated in practice. Picture a mid-sized firm.

Path one: The firm has rolled out Microsoft 365 Copilot. Through the subprocessor chain, data flows to Anthropic. This is the official path, covered by the Microsoft contract, but with the EU Data Boundary exception.

Path two: Some employees use Claude directly in their browser. Because they know it from personal use, because a colleague recommended it, because it produces better results for certain tasks. This is the shadow AI path. No contract, no DPA, possibly with training data usage.

Two different legal bases. Two different risk assessments. Both need to be documented in one DPIA.

In practice, most DPIAs for Copilot do not account for the Anthropic subprocessor. And shadow AI is rarely captured at all. Put those together and you have a DPIA that exists on paper but does not reflect the actual data flows.

What your DPIA needs to include

If you have a DPIA for Copilot (and you need one), the following items need to be added as of January 2026.

Data flow diagram: Draw the Anthropic path. Microsoft 365 Copilot processes a request. Part of it gets forwarded to Anthropic. Data leaves the EU. That needs to be visible in the diagram.
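
If it helps to make the path concrete before you draw it, here is a minimal sketch of the hops as structured data. The field names and the routing granularity are my own illustrative assumptions, not Microsoft's documented architecture; verify against the current subprocessor documentation before it goes into your DPIA.

```python
# Illustrative sketch of the Copilot-to-Anthropic path for the DPIA data flow diagram.
# Field names and routing details are assumptions, not Microsoft's documented architecture.
copilot_anthropic_path = [
    {
        "step": "Employee prompt in Word / Copilot",
        "operator": "Your organization",
        "location": "EU tenant",
    },
    {
        "step": "Copilot orchestration and grounding",
        "operator": "Microsoft",
        "location": "EU (EU Data Boundary)",
    },
    {
        "step": "Model request routed to Anthropic",
        "operator": "Anthropic (subprocessor)",
        "location": "US cloud infrastructure (outside the EU Data Boundary)",
        "safeguard": "SCCs in the Microsoft contract",
    },
]

# Print the path as a simple flow, one hop per line.
for hop in copilot_anthropic_path:
    print(" | ".join(f"{key}: {value}" for key, value in hop.items()))
```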

Risk assessment: Evaluate the risk of non-EU processing by Anthropic. What data types could be affected? What is the likelihood? What safeguards exist (SCCs in the Microsoft contract)?

Subprocessor register: Anthropic needs to be in your internal register. Not just on the Microsoft list. In your own documentation. Type of processing, location, contractual basis.
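
As a sketch of what that entry could look like if you keep the register as structured data rather than a spreadsheet (the field names are mine, not a prescribed format):

```python
# Hypothetical subprocessor register entry; field names are illustrative only.
anthropic_entry = {
    "subprocessor": "Anthropic",
    "engaged_via": "Microsoft 365 Copilot (Microsoft subprocessor list)",
    "type_of_processing": "Model inference on routed Copilot requests",
    "location": "US (outside the Microsoft EU Data Boundary)",
    "contractual_basis": "Microsoft DPA plus Standard Contractual Clauses",
    "added": "2026-01",
    "dpia_reference": "Copilot DPIA, third-country transfer section",
}
```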

TOM documentation: What technical and organizational measures apply to the Anthropic processing? You need to be able to answer that question. Even if the measures primarily come from Microsoft.

For the shadow AI path (employees using Claude directly), you also need a clear statement in your AI acceptable use policy. Is Claude Free/Pro allowed for business use or not? If yes, under what conditions? If no, how do you enforce it?
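
On the enforcement side, one practical option is detection: flag traffic to Claude's consumer endpoints in exported proxy or DNS logs and follow up with the affected teams. A minimal sketch, assuming plain-text log lines; the domain list and log format are assumptions you would adjust to your environment.

```python
# Minimal shadow AI detection sketch. Assumes exported proxy/DNS logs as text lines;
# domain list and log format are assumptions, not a vendor-specific integration.
SHADOW_AI_DOMAINS = ("claude.ai", "api.anthropic.com")

def flag_shadow_ai(log_lines):
    """Return the log lines that mention a monitored AI domain."""
    return [line for line in log_lines if any(d in line for d in SHADOW_AI_DOMAINS)]

# Example usage:
# with open("proxy.log") as logfile:
#     hits = flag_shadow_ai(logfile)
```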

My take on Claude

I will be direct about this: Claude is a capable tool. Anthropic has solid data protection practices at the higher tiers (Team, API). No training on customer data, DPA available, transparent documentation.

But the free version and Pro without a DPA are not suitable for business use with personal data. That applies to Claude the same way it applies to ChatGPT Free or Gemini consumer. No DPA, no business use. The rule is the same for every tool.

And the subprocessor angle affects you even if you have never actively deployed Claude. If you use Copilot, Anthropic is already part of your processing chain. That is not an argument against Copilot. It is an argument for keeping your DPIA current.

How this connects to Copilot oversharing

The oversharing issue and the subprocessor issue are connected. If Copilot searches too much data (because SharePoint permissions are too broad) and some of those requests get routed to Anthropic (because Anthropic is a subprocessor), then data that the user should not have accessed in the first place might leave the EU.

That is not Anthropic’s problem. That is a configuration issue on your side. But it illustrates why a permissions audit before a Copilot rollout is not optional.

Next step

Your DPIA needs to reflect the current subprocessor chain. Microsoft updates that list regularly. Anthropic was added in January 2026. Who comes next is not predictable.

My Managed Compliance Service includes quarterly DPIA reviews. Subprocessor changes like this get caught before they become an audit problem.

Book a 30-minute call | Learn about my services


Jose Lugo is a CISSP-certified AI compliance consultant who configures Microsoft 365 environments for regulated firms in Germany. He understands both the Copilot architecture and the GDPR requirements for subprocessor chains.