Portugal
New Data Protection Paradigm in the AI Era
AI: Data Protection Action Plan
January 26, 2024

AI’s data protection challenges

The daily operating paradigm of organizations has changed: artificial intelligence (“AI”) has become part and parcel of their business dealings, and the use of this technology has generated a series of new data protection challenges.

AI processes information to learn, adapt, and make predictions or recommendations. However, the vast amount of data required to train its algorithms raises concerns about the privacy and security of this information.

Although not all AI systems need to feed on personal data, in many cases the information collected may be directly or indirectly linked to an individual and therefore involve the processing of personal data. Consequently, although the official publication of the AI Act is still pending (though expected soon), we already know that it will require AI systems to be developed in compliance with the obligations and principles of the General Data Protection Regulation (“GDPR”).

The rights to privacy and data protection are therefore recognized from the outset. In particular, these rights must be upheld throughout the life cycle of the AI system by complying with the principles of data minimization and data protection by design and by default.

We must also bear in mind that the current standard practice in training AI systems is to draw on multiple sources, particularly web scraping, which consists of extracting data, including personal data, from the internet.
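
By way of illustration only, the minimal Python sketch below (the target URL is hypothetical, and the widely used requests and BeautifulSoup libraries are assumed to be available) shows how even a simple scraper aimed at generic content can incidentally capture personal data such as e-mail addresses:

```python
import re
import requests
from bs4 import BeautifulSoup

# Hypothetical target page; example.org stands in for any public website.
URL = "https://example.org/team"

# Fetch the page and parse its HTML into plain text.
response = requests.get(URL, timeout=10)
soup = BeautifulSoup(response.text, "html.parser")
text = soup.get_text(separator=" ")

# A scraper built to collect generic content can sweep up personal data:
# here, a simple pattern match surfaces e-mail addresses found on the page.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)

print(f"Collected {len(text)} characters of text")
print(f"Incidentally captured {len(emails)} e-mail addresses:", emails)
```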

The lack of direct interaction between the entity that scrapes the data (the data controller in this case) and the data subject makes compliance with GDPR obligations difficult. It raises, in particular, (i) difficulties in complying with the right to information, which requires transparency in how personal data is obtained; (ii) the possible lack of a lawful basis for collecting and processing the data; (iii) potential non-compliance with the terms and conditions of certain websites (which may specify what can and cannot be extracted); and (iv) an increased risk of data breaches.

Although the GDPR exempts data controllers from providing information where doing so would involve a disproportionate effort, the restrictive interpretation adopted by supervisory authorities creates a number of uncertainties.

Essential compliance requirements from a data protection and AI perspective

To address the existing risks, we must consider the key compliance requirements from a data protection and AI perspective:

· Transparency: Organizations must clearly inform users of the terms under which their personal data is processed, promoting more transparent and reliable AI systems and taking into account the impact on data subjects.

· Lawful basis: Organizations must analyze and identify the applicable lawful basis for processing personal data, ensuring that it falls within the provisions of the GDPR. AI systems must process personal data lawfully, which may limit the availability of certain datasets for the development, training and operation of these systems.

· Purpose limitation: As AI systems frequently discover new applications and correlations in existing data, their functionalities must be kept within the purposes for which the data was originally collected. This may, however, pose a challenge to expanding their capabilities and applications.

· Data minimization: While AI systems can process substantial amounts of data, the GDPR establishes that they must only collect the data that is strictly necessary for the stated purpose. Organizations should therefore implement more selective approaches and focus on minimizing the personal data processed (an illustrative sketch of such an approach follows this list).

· Exercise of rights: Organizations must establish the necessary mechanisms to handle requests to exercise data protection rights at the scale of processing carried out as a result of using AI systems. AI can also be used to make automated decisions based on data and algorithms, which may raise concerns related to ethics, equality and fairness.

· Data protection impact assessment: Before implementing AI systems, organizations must identify the risks associated with adopting and using them, based on the fundamental principles and requirements of data protection. To this end, they should assess the need for a data protection impact assessment (“DPIA”). Depending on the outcome of the DPIA, they should (i) continue processing personal data as planned, (ii) define and implement risk mitigation measures, (iii) conduct a prior consultation with the Portuguese Data Protection Authority (CNPD), or (iv) rethink the AI-powered platform development project.
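
As a purely illustrative sketch of the data minimization point above (the field names, the stated purpose and the pseudonymization approach are assumptions, not a prescribed method), the following Python snippet shows the kind of selective approach the principle points to: keeping only the fields strictly necessary for the stated purpose and replacing direct identifiers with a pseudonymous key before the data is used for training:

```python
import hashlib

# Hypothetical raw records; only some fields are strictly necessary
# for the assumed purpose (e.g., predicting service usage).
raw_records = [
    {"name": "Ana Silva", "email": "ana@example.org", "age": 34,
     "usage_hours": 12.5, "plan": "premium"},
]

# Assumption: these are the only fields needed for the stated purpose.
NECESSARY_FIELDS = {"age", "usage_hours", "plan"}

def minimize(record: dict) -> dict:
    """Keep only the strictly necessary fields and pseudonymize the identifier."""
    reduced = {k: v for k, v in record.items() if k in NECESSARY_FIELDS}
    # A one-way hash stands in for the identity; in practice a salted or
    # keyed scheme, kept separate from the dataset, would be needed.
    reduced["subject_id"] = hashlib.sha256(record["email"].encode()).hexdigest()[:12]
    return reduced

training_data = [minimize(r) for r in raw_records]
print(training_data)
```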

Action plan for organizations

For organizations that develop or use AI systems that process personal data, we recommend the following action plan:

  1. Clearly define the lawful basis for processing personal data
  2. Apply data protection principles from the very beginning
  3. Develop transparency and accountability mechanisms
  4. Establish an effective and appropriate procedure for managing and complying with requests to exercise data protection rights
  5. Ensure employee training and raise awareness of data protection issues
  6. Conduct the necessary data protection impact assessments
  7. Establish a governance and continuous monitoring model.