Guidelines on the requirements for audits of processing involving artificial intelligence (“AI”)

February 5, 2021

On January 12, the Spanish Data Protection Agency (“AEPD”) published new guidelines on the requirements that audits of processing involving artificial intelligence must meet (“Guidelines for AI audits”).


This document builds on a broader set of guidelines addressing the doubts that AI raises in terms of data protection: the Guidelines for Adapting Processing Involving AI to the GDPR, published by the AEPD on February 13, 2020.

Both documents are aimed at data controllers who process data with AI, as well as data processors and developers who support this processing.

While the earlier guidelines introduced the relationship between AI and data protection throughout the AI life cycle, highlighting the obligations arising at each stage and the importance of ensuring that these technologies comply with data protection regulations, this new document focuses mainly on offering guidance and a list of objective criteria to be specifically included in audits of data processing involving AI.

Its scope is therefore much narrower, as it addresses the audit only as a possible tool for assessing regulatory compliance, that is, for guaranteeing that the final product incorporating AI complies with data protection regulations and is therefore transparent, predictable and controllable; in the term commonly used in relation to AI, that the AI is trustworthy.

In summary:

These latest guidelines therefore reflect what, according to the AEPD, an audit on AI in data protection should involve, its objectives and controls. Given their uniqueness with respect to other processing or their significance in these cases, the following objectives, for which the AEPD establishes different controls, should be highlighted:

  • Inventory the audited algorithm
  • Identify responsibilities
  • Comply with the principle of transparency
  • Identify the purposes
  • Analyze the principle of proportionality (suitability, proportionality and necessity of the processing) and, if necessary, the completion of an impact assessment
  • Analyze the limits on collecting and storing data
  • Adapt the theoretical base models or the methodological framework
  • Identify the basic architecture of the AI component
  • Ensure data quality
  • Control possible biases
  • Verify and validate the actions performed on the AI component and their results, covering:
      ◦ performance, consistency, stability and robustness (e.g., whether the AI’s behavior has been assessed in cases of unforeseen use or environment, whether the AI’s type of learning and its adaptability to new data have been assessed, or which factors could, if they vary, affect the properties of the AI or its compliance);
      ◦ traceability (e.g., whether there are monitoring and supervisory mechanisms, version control, a reassessment procedure, recording of incidents, etc.);
      ◦ security (e.g., whether risk analyses have been conducted, whether available standards and good practices are followed, privacy by design and by default, etc.).

Many of these objectives and criteria, such as bias control or stability, traceability or security verification, are cross-cutting targets that must be analyzed from several perspectives to assess and control other issues that are also relevant and discussed in relation to AI components, such as discriminatory biases, objectivity, trust or transparency.

However, many other matters lie outside the guidelines’ scope of application and would need to be covered separately in the auditors’ assessment, such as ethics, efficiency and the allocation of responsibilities. These controls are also independent of any other obligations that may apply to data controllers, whether arising from data protection regulations (e.g., impact assessments) or from other regulations.

In any case, this list is extensive and its application will ultimately depend on the AI component in question and the level of impact and risk that it entails for the rights and freedoms of the data subjects.

We will continue to follow this long (and interesting) path towards trustworthy AI, step by step.

Author: Adaya María Esteban Ruiz
