Proposal for an EU regulation on artificial intelligence

June 3, 2021
European Union

As reported in previous blog entries, on April 21, the European Commission (EC) published the final version of the long-awaited proposal for a Regulation on the legal framework applicable to artificial intelligence (AI) systems. The proposal (i) regulates high-risk AI systems; and (ii) provides harmonized transparency rules for AI systems intended to interact with humans and used to generate or manipulate image, audio or video content.

Considering the economic and social benefits arising from the use of AI, the EC promotes the safe and ethical use and development of AI systems, laying down rules aimed at mitigating specific risks and negative outcomes.

The regulation covers an extensive subject matter and a broad territorial scope. It applies to all participants across the AI value chain (including suppliers, importers and distributors), whether established (i) in the EU or (ii) in third countries, provided the AI systems have effects within the EU.

The proposed regulation breaks down AI systems into four risk levels, imposing more or less stringent requirements based on their classification:

Classification:

  1. Prohibited AI systems. This category includes an exhaustive list subject to periodic review. These AI systems are prohibited because they pose an unacceptable risk to safety, life or fundamental rights. The list includes AI systems capable of (i) distorting human behavior; (ii) making forecasts about groups to identify their vulnerabilities or distinct features; and (iii) allowing for real-time biometric identification or mass surveillance by authorities in public spaces. The systems under (iii) are only allowed for law enforcement purposes, subject to a judicial or administrative authorization.

  2. High-risk AI systems. This category lists AI systems that are not prohibited but entail a “high risk” for individual rights and freedoms, and that should therefore be subject to more stringent requirements ensuring a legal, ethical, robust and safe use. This is also an exhaustive list subject to periodic review to adapt it to new technologies. It comprises safety components for regulated sectors or critical infrastructure, such as air transport, motor vehicle surveillance or railway transport, as well as AI systems used for biometric identification and categorization, recruitment, border control, law enforcement or to evaluate individuals’ credit scores.

  3. Low/medium-risk AI systems. This category includes AI systems not posing a high risk to rights and freedoms, comprising less sophisticated or invasive technologies such as virtual assistants or chatbots.

  4. Other AI systems. These AI systems are not subject to any specific requirements, although operators in the supply chain may adhere to voluntary compliance schemes. These systems would therefore fall outside the scope of the regulation.

We summarize below the main requirements applicable to each category:

Main requirements:

1. Prohibited AI systems:

These systems entail an unacceptable risk from the outset. However, those used for real-time remote biometric identification in public spaces will be exceptionally allowed for law enforcement purposes, subject to a judicial or administrative authorization. This authorization may be requested after using the AI system in “situations of urgency,” which has given rise to a heated debate.

2. High-risk AI systems:

These AI systems will be allowed provided they (i) are subject to conformity assessments; and (ii) implement risk management throughout their entire life cycle. Every operator across the value chain would be subject to specific requirements, including:

  • Data governance: the data used should meet certain quality standards or be subject to monitoring or examination for possible biases.
  • Safety and human oversight: there should always be a natural person capable of controlling the system to mitigate potential risks.
  • Transparency obligations: the system’s operation must be described, and the AI supplier’s identity and related information must be stated.
  • Registration in an EU database: registration should occur prior to placing the AI system on the market.
  • Passing the conformity assessment and obtaining the relevant certification: there will be mandatory technical specifications.

3. Low/medium-risk AI systems:

These systems would only be subject to a set of transparency obligations aimed at ensuring that users are aware of the systems’ operation, characteristics and the implications of their use.

4. Other AI systems:

If the proposed wording remains unchanged, these systems will be subject to voluntary self-regulation schemes such as voluntary codes of conduct. Some sectors disagree with this “open-ended regulation” proposal, supporting more regulation, even for the less sophisticated systems currently representing the largest share of AI systems on the market.

Penalties:

The regulation provides the following penalties in case of non-compliance:

  • Non-compliance with the prohibited practices or with the data governance obligations applicable to high-risk AI systems: up to €30 million or 6% of the offender’s total worldwide annual turnover for the previous financial year;
  • Non-compliance with any other requirements or obligations: up to €20 million or 4% of the offender’s total worldwide annual turnover for the previous financial year;
  • Providing incorrect, incomplete or misleading information to national bodies or authorities: up to €10 million or 2% of the offender’s total worldwide annual turnover for the previous financial year.

The European Parliament (EP) and the Council will review and debate this proposed regulation and may suggest amendments. After its adoption, the regulation will be directly applicable in all EU Member States, allowing for homogeneous enforcement.

This proposal takes into account the EP recommendations from October 20, 2020. These recommendations were the first package of a possible AI regulatory framework, discussed in this blog entry, comprising three EP resolutions with recommendations for the EC regarding ethical, intellectual property and civil liability aspects.

We will follow this proposed regulation’s development and modifications until its final adoption. It will undoubtedly be one of the most significant topics of 2021, so we will continue paying careful attention to it.

Authors: Claudia Morgado and Adaya Esteban
