In response to our article on the privacy-compliant use of AI systems, we were asked whether guidelines for the legally compliant use of AI within organizations also exist in Germany. As a follow-up to that article, we are therefore glad to summarize the recommendations from the guidance on artificial intelligence and data protection issued by the Conference of the Independent Data Protection Authorities of the Federal and State Governments (Data Protection Conference – DSK) (available only in German), even though this guidance has already been available since May 2024. With this guidance, the DSK intends to provide organizations with a practical framework for selecting, implementing, and using AI systems in compliance with the EU General Data Protection Regulation (“GDPR”).
Clear Objectives and Purpose Limitation
Before using AI, organizations must clearly define what the application will be used for. Purpose limitation is a cornerstone of data protection law, as it determines whether the processing of personal data is actually necessary. Public authorities must also ensure that deployment is within their statutory mandate.
At the same time, organizations must verify whether a particular use case is legally permissible at all. Certain practices are prohibited under the EU AI Act, such as social scoring or real-time biometric surveillance. These uses are not open for experimentation.
There are also scenarios in which AI can operate without processing personal data, for example in purely scientific analyses or technical debugging. In such cases, data protection law does not apply. However, organizations must still carefully assess whether an indirect link to identifiable persons could nevertheless arise.
Legal Basis and Limits of Automated Decision-Making
Whenever personal data is processed, a legal basis is required. Depending on the context, different provisions apply – labor law, healthcare contracts, or Article 6 GDPR. For sensitive data categories (e.g., health or religion), Article 9 GDPR imposes even stricter conditions.
Crucially, decisions with legal or similarly significant effect may not be based solely on automated processing. Article 22 GDPR requires meaningful human involvement in the final decision. Responsibility cannot simply be shifted to an algorithm. Only in narrowly defined cases, such as automated administrative acts with clear legal requirements, may exceptions apply.
Closed vs. Open Systems
The guidance distinguishes between closed and open systems:
- Closed systems run locally or in controlled environments. Data remains within a protected domain and is not shared with providers.
- Open systems, such as publicly accessible cloud services, pose higher risks: data leaves the secure domain, may be reused for training, or transferred to third countries.
From a data protection perspective, closed systems are preferable, especially when dealing with sensitive or confidential information.
Transparency and Data Subject Rights
One of the greatest challenges in AI deployment is explainability. Yet, controllers must be able to describe the logic, scope, and potential consequences of AI-supported processes in understandable terms. This is especially relevant for automated decision-making. Visual or simplified explanations can be useful tools.
Equally important are the rights of data subjects. Individuals are entitled to:
- Correction of inaccurate data,
- Erasure under Article 17 GDPR,
- Restriction, portability, and objection rights.
These rights often require technical solutions, such as fine-tuning models or ensuring erased data cannot re-enter the system. Filters can help prevent harmful outputs, but they do not replace true deletion.
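By way of illustration only, the following Python sketch shows what such an output filter might look like. The `erased_subjects` list is hypothetical and stands in for a controller-maintained register of erasure requests; as noted above, the filter merely suppresses output and does not delete anything from the model itself.

```python
import re

# Hypothetical register of data subjects whose erasure (Art. 17 GDPR)
# has been requested. In practice this would be maintained by the
# controller's rights-management process, not hard-coded.
erased_subjects = ["Erika Mustermann", "Max Mustermann"]

def filter_output(text: str) -> str:
    """Redact references to erased data subjects from model output.

    Note: this only suppresses the output. The underlying model may
    still "know" the data, so filtering does not replace true erasure.
    """
    for name in erased_subjects:
        text = re.sub(re.escape(name), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

print(filter_output("The report was written by Erika Mustermann in 2023."))
# -> The report was written by [REDACTED] in 2023.
```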
Internal Organization and Responsibilities
For AI to be GDPR-compliant, clear responsibilities are necessary.
- An organization operating a system for its own purposes is usually the sole controller.
- When using an external system (e.g., cloud-based), the relationship is often one of data processing on behalf of the controller under Article 28 GDPR.
- If multiple parties jointly determine purposes and means, a joint controllership exists and must be contractually defined under Article 26 GDPR.
Without clear internal rules, employees may use AI on their own, potentially leading to privacy violations. The DSK therefore strongly recommends internal policies or works agreements to govern AI use.
Data Protection Impact Assessment and Technical Security
In many cases, AI usage requires a Data Protection Impact Assessment (DPIA) because it typically involves high risks to individuals’ rights. Organizations must evaluate the type, scope, purpose, and context of processing. Supervisory authorities also publish lists of processing activities that always require a DPIA.
Additionally, the principle of “Privacy by Design and by Default” applies. Systems should be designed so that privacy protections are built in from the start. For instance, default settings should ensure that inputs are not stored for training and chat histories are not retained indefinitely.
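Purely as an illustration of what privacy-protective defaults can mean in practice, a configuration sketch might look as follows. All parameter names (`use_inputs_for_training`, `chat_retention_days`, `log_prompts`) are hypothetical and do not refer to any specific product; the point is that the default values, not an opt-out, must be the privacy-protective ones.

```python
from dataclasses import dataclass

@dataclass
class AIPrivacyDefaults:
    """Illustrative privacy-by-default settings for an AI deployment.

    Field names are hypothetical. What matters is that the defaults
    are privacy-protective unless deliberately and lawfully changed.
    """
    use_inputs_for_training: bool = False  # inputs are not stored for training
    chat_retention_days: int = 30          # histories are not kept indefinitely
    log_prompts: bool = False              # no routine logging of prompts

config = AIPrivacyDefaults()  # defaults apply unless explicitly overridden
print(config)
```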
IT security is equally critical. AI systems must meet established standards for confidentiality, integrity, and availability to prevent unauthorized access or data leaks. The German Federal Office for Information Security (BSI) provides detailed guidance on this topic.
Protecting and Training Employees
Employers should provide dedicated accounts and devices for the professional use of AI. Employees should not be forced to use personal accounts, as this could lead to unnecessary profiling of their activities. Instead of personal logins, functional addresses should be used.
Training and awareness are also vital. Staff must understand which data they can input, what risks exist, and how to handle AI-generated results. Without proper awareness, uncontrolled and potentially unlawful use is likely.
Practical Use: Inputs, Outputs, and Risks
In daily operations, caution is paramount. Even seemingly harmless prompts may involve personal data. For example, asking for a job reference for a “customer advisor at Company X” could indirectly identify an individual. Similarly, outputs may contain personal information even if the input was anonymized. Controllers must then determine whether a legal basis exists and whether data subjects must be informed.
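As a minimal sketch of what such caution could look like in practice, a prompt might be screened before it is sent to an open system. The regex-based detection below catches only the most obvious identifiers (e-mail addresses, phone numbers) and is an assumption for illustration; indirect personal references, such as a job title at a named employer, require human review or dedicated PII-detection tooling.

```python
import re

# Simple patterns for obvious identifiers. Indirect personal references
# (names, roles at a named employer, etc.) need far more than regex.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def flag_personal_data(prompt: str) -> list[str]:
    """Return the categories of obvious personal data found in a prompt."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

hits = flag_personal_data("Draft a reference for our customer advisor, mail: j.doe@example.com")
if hits:
    print(f"Warning: prompt appears to contain personal data ({', '.join(hits)}).")
```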
Particular care is required with special categories of personal data. Health information, political beliefs, or biometric identifiers may only be processed under narrow exceptions.
Moreover, AI outputs must never be assumed to be accurate. Large language models (LLMs) may generate false or outdated information. Results must therefore be critically verified before further use.
Finally, systems must be checked for discrimination risks. Biased outputs that disadvantage certain groups are not only unethical but unlawful. For example, an AI suggesting that only male candidates should be hired would clearly violate anti-discrimination law (AGG in Germany) and cannot serve as a basis for decisions.
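To illustrate what a basic check for such risks might look like, the following sketch compares selection rates of an AI-assisted process across groups. The 80% threshold is borrowed from the US "four-fifths rule" and is used here only as an illustrative flag for human review, not as a legal standard under the AGG.

```python
def selection_rates(decisions: dict[str, tuple[int, int]]) -> None:
    """Compare per-group selection rates of an AI-assisted process.

    `decisions` maps group label -> (selected, total). A large gap
    between groups is a signal for human review, not an automatic
    legal verdict.
    """
    rates = {group: sel / total for group, (sel, total) in decisions.items()}
    highest = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / highest
        flag = "  <-- review for possible bias" if ratio < 0.8 else ""
        print(f"{group}: selection rate {rate:.0%} ({ratio:.0%} of highest){flag}")

# Hypothetical hiring-suggestion outcomes by gender, for illustration only
selection_rates({"male": (40, 100), "female": (15, 100)})
```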
Conclusion: Responsible AI Use Is Essential
The DSK guidance makes one point very clear: AI can provide valuable support, but only if its deployment respects the fundamental principles of data protection law. Organizations must:
- define the purpose clearly,
- establish a valid legal basis,
- favor closed systems,
- ensure transparency and enforce data subject rights,
- and continuously assess risks, accuracy, and fairness.
The message is unambiguous: AI must not become a lawless domain. Human accountability, clear governance, and robust technical safeguards are indispensable for maintaining trust in these technologies and protecting the rights of individuals.
This article is intended to convey general thoughts on the topic presented. It should not be relied upon as legal advice. It is not an offer to represent you, nor is it intended to create an attorney-client relationship. References to “MAYRFELD”, “the law firm”, and “legal practice” are to one or more of the MAYRFELD members. No individual who is a partner, shareholder, employee or consultant of MAYRFELD (whether or not such individual is described as a “partner”) accepts or assumes responsibility, or has any liability, to any person in respect of this communication. Any reference to a “partner” is to a member, employee or consultant with equivalent standing and qualifications of MAYRFELD. The purpose of this communication is to provide information as to developments in the law. It does not contain a full analysis of the law nor does it constitute an opinion of MAYRFELD on the points of law discussed. You must take specific advice on any particular matter which concerns you.
For more information about MAYRFELD Rechtsanwälte PartG mbB, please visit us at www.mayrfeld.com.
