Considerations on the EU proposals to regulate artificial intelligence (AI)


A draft EU regulation “on a European approach for artificial intelligence” (the “Draft”) was leaked a few days ago. If adopted, it would further define a common European framework for the use and exploitation of AI, supplementing previous European Parliament (EP) and European Commission (EC) proposals.

April 21, 2021

The Draft specifically addresses the use and placing on the market of high-risk AI systems in the EU. It also lays down harmonized transparency standards for AI systems interacting with (i) natural persons and (ii) software used to generate or manipulate image, audio or video content. The Draft is still preliminary, so its final version could change significantly.

On October 20, 2020, the EP published several resolutions, together constituting the first proposal for an AI legal framework after the EC White Paper on Artificial Intelligence (discussed in this blog entry).

Below is a summary of the EP resolutions on AI, classified by subject, for a better understanding of the situation:

Ethical aspects (EP resolution 2020/2012(INL)).

  • The proposal seeks respect for human dignity, fundamental rights and self-determination. It also intends to prevent harm and promote fairness, inclusion and transparency, removing biases and discrimination.
  • The proposal considers that AI technologies should be human-centered, imposing specific obligations on high-risk AI systems, which must be restricted and carefully listed in the relevant regulations. AI systems will be classified as high-risk based on the following objective criteria: (i) the technologies’ ability to cause damage or breach fundamental rights and safety rules; (ii) their specific use or purpose; and (iii) the sector where they are deployed. This list will be reviewed periodically.
  • The proposal covers various scenarios and provides for an “ethical responsibility test” to be passed by any company intending to use high-risk AI systems. It is a prior, impartial and regulated assessment carried out by an external public body based on specific, predefined criteria.

Civil liability (EP resolution 2020/2014(INL)).

  • The EP recommends adapting the rules on product liability (the Product Liability Directive) and product safety (the Product Safety Directive).
  • Several factors related to AI technologies justify these adjustments and set AI systems apart from other products on the market, including their complexity, connectivity, potential lack of transparency, vulnerability, capacity for self-learning and degree of autonomy.
  • The proposal emphasizes that AI technologies have no legal personality and are only aimed at serving humanity. Under the liability principles, whoever creates, maintains or interferes with AI systems must be accountable for any damage arising from their activity and subject to the proposed standards.
  • The proposal allocates liability among all actors in the value chain (including developers, manufacturers, programmers and operators). A significant development is the differentiation between frontend and backend operators. Theoretically, frontend operators decide on the use of AI systems, but backend operators could have a high degree of control if they qualify as “producers” under article 3 of the Product Liability Directive. If so, they could be primarily liable.
  • The proposal recommends reversing the burden of proof for the damage caused in specific cases, including:

(i) A strict liability regime for operators of high-risk AI systems, held liable for any damage caused by physical or virtual activities, devices or processes driven by AI systems. Operators may not exclude their liability by arguing that they acted diligently or that the damage was caused by an activity driven by their AI system. Operators will only be exempt in cases of force majeure.

(ii) A fault-based liability regime for operators of non-high-risk systems, held liable for the damage caused (unless they can prove, on specific grounds, that the damage occurred without fault). Operators may not exclude their liability by arguing that the damage was caused by an activity, device or autonomous process driven by their AI system.

(iii) A joint and several liability regime if there is more than one operator of an AI system.

Intellectual and industrial property (EP resolution 2020/2015(INI)).

  • The EP recommends differentiating between AI-assisted human creations and AI-generated creations.
  • The current intellectual and industrial property framework remains applicable to AI-assisted human creations, the author being the right holder.
  • However, the EP considers that AI-generated creations should not be subject to copyright protection, in order to safeguard the principle of originality (linked to the author’s personality and human nature).
  • Nevertheless, the EP considers that AI-generated creations should be protected to promote investment and improve legal certainty. The EP recommends granting copyright protection for this “creation” to the natural person lawfully editing and making it available (as long as the underlying technology’s designer(s) do not object).

On January 20, 2021, the EP published a proposal for a resolution on AI and the interpretation and application of international law in civil and military uses. Below is a summary of the aspects covered by the proposal:

Military uses and human oversight (EP resolution 2020/2013(INI)).

  • The EP supports systems allowing for a high degree of human control over AI systems, so that humans are always able to correct, stop or disable them in the event of unexpected behavior, accidental intervention, cyberattacks or third-party interference with AI-based technology.
  • The EP argues that lethal autonomous weapon systems (LAWS) (i) should only be used as a last resort; and (ii) are only lawful if they are subject to strict human control. Systems without any human control (“human off the loop”) or oversight must be banned without exceptions.
  • The proposal recommends promoting a global framework governing the use of AI for military purposes together with the international community.

AI in the public sector (EP resolution 2020/2013(INI)).

  • The increased use of AI systems in health care and justice should never replace human contact. We should all be entitled to (i) know if a decision is made by an AI system; and (ii) a second opinion.
  • In case of health care uses of AI (e.g., robot-assisted surgery, smart prosthetics and predictive medicine), it is necessary to protect patients’ personal data and the principle of equal treatment.
  • AI technologies can expedite judicial proceedings and allow for more rational decisions. However, final judicial decisions must be (i) made by humans; (ii) strictly subject to human verification; and (iii) subject to due process.

Mass video surveillance and deepfakes (EP resolution 2020/2013(INI)).

  • Regulators are very concerned about the threats to human rights and state sovereignty posed by AI technologies.
  • The proposal calls for banning public authorities from using highly intrusive social scoring applications (to control and classify citizens), as they pose a serious threat to fundamental rights.
  • The proposal also raises concerns about deepfake technologies, which allow for increasingly realistic photo, audio and video forgeries that could be used for blackmail, to generate fake news, or to undermine public trust and influence public opinion. The proposal requests that all deepfake material and any other realistically made synthetic videos be labeled as “not original.”

For now, these are regulatory proposals, and the final version of the Draft could change significantly. On April 15, 2021, several members of the EP sent a letter to the EC objecting to the mass surveillance framework adopted in the Draft.

The final regulatory framework will depend on the EC’s decision, expected in late April 2021. We will pay attention and report any developments on this blog.

Authors: Adaya Esteban, Octavi Oliu and Claudia Morgado