Post by Peter Lotz, M.C.J. (NYU), Attorney-At-Law (N.Y.), MAYRFELD Rechtsanwälte & Attorneys-At-Law


Introduction

As Artificial Intelligence (AI) rapidly spreads across various areas of application, so too grows the responsibility to design these technologies in compliance with data protection law. AI systems often process large amounts of personal data, which must be protected in accordance with the EU General Data Protection Regulation (GDPR). In its (German-language only) guidance on recommended technical and organizational measures for the development and operation of AI systems (as of June 2025) (the “Guidance”), the German Conference of Independent Data Protection Authorities of the Federal and State Governments (“Datenschutzkonferenz”, DSK) outlines specific technical and organizational measures (TOMs) to be implemented throughout the lifecycle of an AI system. The aim is to systematically safeguard both legal data protection requirements and the rights and freedoms of individuals.

The German Data Protection Conference comprises the Federal Commissioner for Data Protection and Freedom of Information (BfDI) and the data protection commissioners of the 16 federal states. The purpose of the Data Protection Conference is to enable a uniform application of data protection regulations across federal and state levels. While the guidance documents issued by the DSK are not legally binding, they are professionally grounded and facilitate consistent interpretation and application of data protection law in practice. Thus, they also serve as an indicator for the private sector regarding how German data protection authorities are likely to interpret and apply the law in specific situations.

This article highlights the main contents of the Guidance, structured according to the lifecycle phases of an AI system—design, development, deployment, as well as operation and monitoring—and focuses on implementing the seven data protection assurance goals: transparency, data minimization, availability, confidentiality, integrity, intervenability, and unlinkability.

1. Legal Framework for Data Protection

The GDPR focuses on protecting natural persons when their personal data is processed. AI systems present particular challenges because they often involve adaptive, data-intensive processes whose decision-making logic is not always easily comprehensible.

1.1 AI System and AI Model

The Guidance defines an AI system as a machine-based system designed for varying degrees of autonomous operation. Once deployed, it can be adaptive and processes inputs (e.g., data) to generate outputs such as predictions, content, recommendations, or decisions in relation to explicit or implicit goals. These outputs can influence physical or virtual environments. An AI system is based on one or more AI models. These range from narrowly focused systems (e.g., medical pattern recognition) to those with general-purpose applications.

According to the Guidance, an AI model is the technical foundation of an AI system. It is created by training on large data sets, enabling the system to detect patterns, make decisions, or perform other tasks. The data protection-relevant feature of AI models lies in how training data is handled and where it comes from (e.g., via crawling or scraping). AI models may originate from third parties and be integrated into other systems or customized for specific uses.

An AI system uses one or more AI models to extract information from data and generate decisions or recommendations. Therefore, the model is a central functional component of the system. The Guidance does not cover AI systems that clearly do not involve any personal data. Neither does it comment on data protection issues arising from the collection of data sets (e.g., via crawling, scraping) and from integrating existing AI models or systems into new ones—whether through the use or specialization of third-party models or integration via an interface.

1.2 Data Protection by Design and Standard Data Protection Model

The principle of “Data Protection by Design” requires developers to take data protection into account already during the design phase of an AI system. The use of personal data in AI systems requires clear purpose limitation, a suitable legal basis, and a risk assessment. In certain cases, a data protection impact assessment (DPIA) under Art. 35 GDPR is required.

To systematically implement the GDPR, the DSK uses the Standard Data Protection Model (SDM) as a practical framework. The SDM translates legal requirements into the seven assurance goals listed in the Introduction above, which must be met through specific technical and organizational measures.

2. Lifecycle Phases and Assurance Goals

The Guidance assumes that an AI system passes through different phases during development and subsequent operation, which together represent its lifecycle. It distinguishes four phases: (1) Design, (2) Development, (3) Deployment, and (4) Operation and Monitoring. These phases may vary in relevance depending on the type of AI system.

2.1 Design Phase: Laying the Foundation for Data Protection

The design phase involves defining the objectives, system architecture, and data strategy of the AI system. At this stage, it must be assessed whether the use of personal data is necessary, or if anonymized or synthetic alternatives can be used. Key principles include:

Transparency: According to the DSK, documentation is key. This includes the processing purpose, legal basis, data collection methodology, dataset structure and origin (e.g., with “Datasheets for Datasets”), system components, and criteria for selecting AI algorithms.

Data Minimization: Only personal data necessary for achieving the AI system’s purpose may be collected. The volume, categories, and sources of data must be carefully selected and justified.

Unlinkability: Unauthorized combination of personal data that could reveal sensitive attributes must be avoided. Indirect inferences via so-called “proxy attributes” (e.g., postal code as an indicator of origin) should also be critically examined.

Intervenability: Processes to fulfill data subject rights (e.g., access, deletion) must be defined from the design phase. Raw data must remain traceable and modifiable, e.g., through metadata management.

Availability: AI systems must be designed to allow prompt and continuous processing. This includes the selection of appropriate database systems.

Integrity: Measures to ensure data quality, such as plausibility checks, detection of data tampering or bias, must be implemented.

Confidentiality: Techniques like differential privacy, encryption, or privacy-preserving learning (e.g., federated learning) should be considered to reduce re-identification or data leakage risks.
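One of the confidentiality techniques named above, differential privacy, can be illustrated with a minimal sketch of the Laplace mechanism applied to a simple count query. The function names and the choice of epsilon below are illustrative assumptions, not part of the Guidance.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw a sample from Laplace(0, scale) via inverse transform sampling."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """Differentially private count query.

    A count has sensitivity 1 (adding or removing one person changes the
    result by at most 1), so Laplace noise with scale 1/epsilon yields
    epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller values of epsilon add more noise and thus stronger protection; the noisy count can then be released without revealing whether any single individual is in the data set.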

2.2 Development Phase: Processing, Training, and Validation

In the development phase, raw data is processed, and AI models are trained and tested. This phase is data-intensive and thus especially relevant for data protection. Key principles include:

Transparency: The selection and use of algorithms, training, validation, and test data must be documented in a traceable manner. Data processing locations, access rights, and retention periods must be clearly defined.

Data Minimization: Only relevant data should be processed. For modular AI systems, ensure that each component only receives the data intended for it.

Unlinkability: Training must strictly follow the defined purpose. Models must not learn unintended correlations (e.g., inferring ethnicity from images).

Intervenability: Data subject rights must be considered during training. Models should be designed to adapt efficiently to deletion requests (e.g., via machine unlearning).

Availability: The training infrastructure must be secured against disruptions (e.g., outages). Backup strategies and redundancy can help prevent system failures.

Integrity: Protection against data poisoning and backdoor attacks is essential. Training data must be accurate, complete, and representative.

Confidentiality: There is a real risk that AI models might “memorize” personal training data and output it unintentionally. Regular testing and technical countermeasures (e.g., regularization, privacy audits) are necessary.
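The memorization risk described above can be probed with a simple “canary” test: unique marker strings are planted in the training data, and the model’s outputs are then scanned for verbatim leakage. The sketch below is an illustrative assumption of such a check, not a method prescribed by the DSK.

```python
def find_leaked_canaries(outputs, canaries):
    """Return the set of canary strings that appear verbatim in any model output.

    `outputs` is a list of generated texts; `canaries` is a set of unique
    marker strings deliberately planted in the training data. Any hit
    indicates that the model has memorized training data verbatim.
    """
    return {c for c in canaries if any(c in out for out in outputs)}
```

In practice such checks are run regularly against sampled model outputs, and a non-empty result should trigger countermeasures such as stronger regularization or retraining without the affected records.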

2.3 Deployment Phase: Delivery and Configuration

This phase concerns software distribution, deployment, and initial configuration of AI systems. Personal data may also be involved—for instance, with non-parametric models that require access to their training data. Key DSK guidelines include:

Transparency: Users of AI systems must be informed about how the system works, its decision-making criteria, and their rights (e.g., the safeguards against solely automated decision-making under Art. 22 GDPR and the right to object under Art. 21 GDPR). Documentation should be understandable even to laypeople.

Data Minimization: Only essential data and model components may be deployed. For example, with parametric models, training data is typically not required and should not be included.

Confidentiality: In offline deployments, all personal data ends up on local devices—this carries higher risks. Cryptographic methods and access restrictions must be applied.

2.4 Operation and Monitoring Phase: Continuous Data Protection

In this phase, the AI system is in productive use. Monitoring, adjustments, and possibly learning components are key. The guidance outlines the following requirements:

Transparency: Decisions—especially those with legal effects—must be traceable and documented. Model parameters, system behavior, and updates must be securely logged.

Data Minimization: If the AI system outputs unnecessary data, the model must be adapted. Feedback loops (e.g., user input) may only be used for model improvement after a data protection review.

Unlinkability: During operation, no new, impermissible data linkages should arise. New data sources must be carefully reviewed.

Intervenability: Technical measures such as manual review processes, indicators for result uncertainty, or filtering systems support human oversight. Implementing rights to rectification, deletion, and restriction of processing may require retraining or “machine unlearning.”

Integrity: The system must adapt to changes in its application context (e.g., new legal requirements). Regular testing (e.g., red teaming) ensures the system’s robustness and reliability.

Confidentiality: APIs and interfaces must not be exploited to extract sensitive data. Access control, logging, and role-based permission systems are essential.
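A role-based permission check with access logging, as referred to above, can be sketched as follows. The role names, permission sets, and logging format are illustrative assumptions, not requirements taken from the Guidance.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-access")

# Illustrative role-to-permission mapping for an AI system.
ROLE_PERMISSIONS = {
    "data_scientist": {"read_training_data", "run_training"},
    "operator": {"query_model", "view_logs"},
    "auditor": {"view_logs"},
}

def is_allowed(role: str, action: str) -> bool:
    """Check an action against the role's permission set and log the decision."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    log.info("role=%s action=%s allowed=%s", role, action, allowed)
    return allowed
```

Unknown roles receive an empty permission set and are therefore denied by default, and every decision is logged, which supports both access control and the auditability the DSK calls for.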

Conclusion

According to the DSK, privacy-compliant AI development is not an either-or between innovation and regulation—it should represent a standard of quality that fosters long-term trust, security, and acceptance. The DSK’s Guidance provides German authorities with a roadmap for implementing data protection-compliant AI and also serves the business community by indicating how German regulators are likely to interpret data protection laws. Crucially, the DSK views technical and organizational measures not as a one-off hurdle, but as a continuous process throughout the AI system’s lifecycle—ensuring that AI’s potential is sustainably harnessed while protecting individual rights and freedoms.


This article is intended to convey general thoughts on the topic presented. It should not be relied upon as legal advice. It is not an offer to represent you, nor is it intended to create an attorney-client relationship. References to “MAYRFELD”, “the law firm”, and “legal practice” are to one or more of the MAYRFELD members. No individual who is a partner, shareholder, employee or consultant of MAYRFELD (whether or not such individual is described as a “partner”) accepts or assumes responsibility, or has any liability, to any person in respect to this communication. Any reference to a “partner” is to a member, employee or consultant with equivalent standing and qualifications of MAYRFELD. The purpose of this communication is to provide information as to developments in the law. It does not contain a full analysis of the law nor does it constitute an opinion of MAYRFELD on the points of law discussed. You must take specific advice on any particular matter which concerns you.

For more information about MAYRFELD Rechtsanwälte PartG mbB, please visit us at www.mayrfeld.com.

About the author: Peter Lotz, M.C.J. (NYU), Attorney-At-Law (N.Y.), MAYRFELD Rechtsanwälte & Attorneys-At-Law
Peter Lotz is a partner of MAYRFELD. For over 20 years, he has counseled domestic and foreign Fortune 500 companies as well as SMEs in connection with the cross-border development, acquisition, licensing and commercialization of novel technologies.