Last updated: March 2026
What Does “Privacy-Friendly” Actually Mean for AI?
GDPR-compliant AI for business is not a marketing term — it is a technical and legal requirement. The GDPR foundation consists of two articles that apply directly to AI systems.
Art. 25 GDPR (Data Protection by Design and by Default) requires that data protection is built into the system architecture from the outset, not added as an afterthought. For AI deployments this means: minimal data collection, clear purpose limitation, and privacy-friendly default settings. If a chatbot forwards all conversations for model improvement by default, that constitutes an Art. 25 violation — unless explicit consent has been obtained.
Art. 28 GDPR (Processor) applies as soon as an AI provider processes personal data on behalf of the company. In that case, a DPA (Data Processing Agreement) is mandatory. According to the state data protection authorities, the DPA must govern purpose specification, right to issue instructions, TOMs (technical and organisational measures), sub-processors, and audit rights. A DPA alone is not sufficient, however: the company, as controller, remains responsible for verifying actual compliance.
Only the combination of both — technical architecture and contractual safeguards — constitutes truly privacy-friendly AI deployment. According to a Bitkom study from 2025 surveying 603 German businesses, 69 per cent feel that data protection makes training and deploying AI models more difficult. This is not an argument against data protection, but an argument for well-considered architectural decisions from the start.
Three Operating Models Compared: SaaS, Private Cloud EU, On-Premise
The choice of operating model is the most important data protection decision when deploying AI. It determines who has access to your data, under which legal framework it is processed, and what control options you have.
| Criterion | SaaS (e.g. ChatGPT, Copilot) | Private Cloud EU | On-Premise / Local LLMs |
|---|---|---|---|
| Data storage location | Provider’s servers (often USA) | EU data centre (e.g. Hetzner, IONOS) | Own hardware on company premises |
| DPA required | Yes (if available) | Yes (with hosting provider) | No (no external processing) |
| Third-country transfer | Frequent (USA, Standard Contractual Clauses) | None | None |
| Model control | None (provider decides) | High (flexible model selection) | Complete |
| Use of data for training | Depends on contract, often unclear | No (self-operated) | No |
| Logging / auditability | Limited | Fully configurable | Fully configurable |
| Typical use cases | General text tasks, non-sensitive data | Internal knowledge bases, HR, legal | Highly sensitive data, strict compliance |
| Infrastructure effort | Low | Medium | High (GPU hardware, maintenance) |
SaaS solutions warrant particular caution: even when a European data storage location is promised, the provider may route support access or model requests through US infrastructure. The European Court of Justice made clear in the Schrems II ruling that Standard Contractual Clauses alone, without accompanying technical measures, are insufficient when US authorities can access data.
GDPR Checklist for AI Deployment
Before an AI system goes into production, a company should have verified five points. This list covers the most common gaps that arise during data protection audits.
1. DPA (Data Processing Agreement)
A DPA pursuant to Art. 28 GDPR must be in place for every AI provider that processes personal data. Pay particular attention to the sub-processor clause: many providers forward requests to third parties (e.g. Azure OpenAI, AWS Bedrock), which must also be covered by the DPA.
2. TOMs (Technical and Organisational Measures)
Encryption of data in transit and at rest, access controls, backups and incident response processes must be documented. For self-hosted solutions, the responsibility lies entirely with the company.
3. Purpose Limitation
AI systems may only process data for the specified purpose. If a system is used for customer correspondence, it must not simultaneously be used for staff appraisals. Purposes must be documented in the records of processing activities (Art. 30 GDPR).
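Purpose limitation can also be enforced technically, not just documented. The following minimal sketch (all names are illustrative, not a real API) mirrors a purpose register from the records of processing activities and rejects any request whose data source is not documented for the declared purpose:

```python
from dataclasses import dataclass

# Hypothetical purpose register, mirroring the records of processing
# activities (Art. 30 GDPR). Purposes and source names are illustrative.
ALLOWED_PURPOSES = {
    "customer_support": {"crm_tickets", "product_docs"},
    "contract_review": {"legal_templates"},
}

@dataclass
class AIRequest:
    purpose: str
    data_source: str

def check_purpose(request: AIRequest) -> bool:
    """Reject any request whose data source is not documented
    for the declared purpose."""
    sources = ALLOWED_PURPOSES.get(request.purpose)
    return sources is not None and request.data_source in sources

# A customer-support query against CRM tickets is within scope ...
assert check_purpose(AIRequest("customer_support", "crm_tickets"))
# ... but the same system must refuse HR data for staff appraisals.
assert not check_purpose(AIRequest("staff_appraisal", "hr_files"))
```

The point of the check is that an undocumented purpose fails closed: anything not in the register is denied by default, which is exactly the privacy-by-default posture Art. 25 requires.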
4. Access Control Concept
Who is permitted to query which knowledge spaces and data sources? A recruiter should have no access to financial data — not even indirectly via an AI that can access all internal documents. Role-based access control is not an optional convenience feature; it is a data protection obligation.
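The key design decision is that the access check runs before retrieval, so the model never sees documents the user is not entitled to. A minimal sketch, with illustrative role and knowledge-space names:

```python
# Hypothetical role-to-knowledge-space mapping (names are illustrative).
ROLE_PERMISSIONS = {
    "recruiter": {"applications", "job_postings"},
    "finance": {"invoices", "payroll"},
}

def can_query(role: str, knowledge_space: str) -> bool:
    # Unknown roles get an empty permission set: deny by default.
    return knowledge_space in ROLE_PERMISSIONS.get(role, set())

def retrieve(role: str, knowledge_space: str, query: str) -> str:
    # The check happens before any documents are fetched, so the
    # AI cannot leak data indirectly through its answers.
    if not can_query(role, knowledge_space):
        raise PermissionError(f"{role} may not query {knowledge_space}")
    return f"results for {query!r} from {knowledge_space}"

assert can_query("recruiter", "applications")
assert not can_query("recruiter", "payroll")  # no indirect route to financial data
```

Filtering the AI's output after the fact is not equivalent: once retrieved documents enter the model's context, their content can surface in the answer.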
5. Logging and Auditability
Which user submitted which query at what time, and which data was processed? Without logging, neither a data breach can be reconstructed nor can a deletion obligation (Art. 17 GDPR) be verifiably fulfilled.
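What such a log entry needs to capture follows directly from those questions: user, timestamp, query, and the data sources touched. A minimal append-only sketch using JSON lines (field names are illustrative):

```python
import json
from datetime import datetime, timezone

def audit_log_entry(user_id: str, query: str, sources: list[str]) -> str:
    """One append-only JSON line per AI request: who asked what, when,
    and which data sources were touched (field names are illustrative)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "query": query,
        "sources": sources,
    }
    return json.dumps(entry)

line = audit_log_entry("u-4711", "vacation policy?", ["hr_handbook"])
record = json.loads(line)
assert record["user_id"] == "u-4711"
assert record["sources"] == ["hr_handbook"]
```

Note the tension with data minimisation: the log itself contains personal data (user IDs, possibly query text), so it needs its own retention period and access controls.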
EU AI Act from 2026: What Changes for Data Protection?
The EU AI Act entered into force in August 2024 and applies in stages; most obligations, including those for high-risk AI systems, take effect from August 2026. Three points are relevant for the data protection framework within companies.
Risk classification: AI systems in the HR domain (recruiting, performance assessment) and in customer credit assessment are classified as high-risk AI. These systems are subject to stricter documentation, transparency and monitoring obligations. An AI system that pre-qualifies candidate profiles is therefore no longer an internal tool — it is a regulated system.
Interface with GDPR: The AI Act and GDPR are not competing frameworks but complementary ones. The AI Act requires fundamental rights impact assessments for high-risk systems, which are substantively similar to the Data Protection Impact Assessment (DPIA) under Art. 35 GDPR. Companies that already have robust GDPR governance can use that documentation as a basis. According to a KPMG analysis of the EU AI Act, combining both frameworks is the central compliance challenge for businesses in 2026.
National implementation: On 11 February 2026, the German Federal Cabinet approved the draft AI Market Surveillance and Innovation Promotion Act (KI-MIG), which establishes the German supervisory structures for the AI Act. Companies should verify whether their AI systems fall into risk categories that require registration in the EU database for high-risk AI.
Case Study: Recruitment Agency with GDPR-Compliant CV Handling
From a project by Schauersberger Software.
A recruitment agency processes incoming application documents daily — including CVs containing dates of birth, addresses, and in some cases health information. The goal: automated extraction, structuring, and matching against open positions.
The data protection problem with a standard SaaS tool: CVs contain personal data that would be processed on US servers, without clear purpose limitation and without deletion deadlines. A third-country transfer would be barely justifiable for applicant data.
The solution was a Private Cloud on a German Hetzner server with a locally operated language model for document processing. The architecture in detail: CVs are uploaded, parsed by the local model, and written in structured form to an internal database. The model has no internet access and no connection to external APIs. Recruiters receive a structured summary with a relevant qualification profile for each application. Deletion deadlines are technically enforced: after six months without activity, applicant data is automatically deleted.
The result: manual pre-qualification is largely eliminated. The Data Protection Impact Assessment under Art. 35 GDPR was feasible because all data flows are internal and fully documented.
Decision Matrix: Which Operating Model Fits When?
The question is not which operating model is “most secure”, but which one suits the actual data situation, the available budget, and the compliance requirements.
SaaS with EU Data Residency
When it makes sense
General text tasks without personal data, no access to internal documents, and no use in sensitive domains (healthcare, HR files, legal). Fast to get started, minimal infrastructure effort.
Private Cloud EU
When it makes sense
Internal knowledge bases containing personal data, industries with elevated protection requirements, teams of 10 or more users with regular document access. A good balance of control, performance and cost.
On-Premise / Local LLMs
When it makes sense
Highly sensitive data (patient records, salary information, state secrets), strict compliance requirements, no external connectivity desired or permitted. Highest infrastructure effort and upfront investment.
A useful rule of thumb: as soon as an AI system accesses documents you would not comfortably send to an external service provider by email, standard SaaS is ruled out. Private Cloud EU is the pragmatic middle ground for most SMEs: full control over architecture and data flows, without the need to operate your own hardware.
The final decision always depends on the specific use case, however. A tax adviser processing client data has different requirements from a marketing agency producing blog posts. Both can be handled in a privacy-friendly way — but with different architectures.