Legal challenges of Deepfakes: the European response


Concept and possible initiatives for the regulation of deepfakes by the EU institutions

October 13, 2021

Deepfakes are videos or audio recordings in which a person’s face or voice is manipulated or recreated using artificial intelligence (AI) to produce fake content. Deepfakes are often very convincing, and it is hard to tell them apart from genuine material, as many examples online show.

As discussed in this blog in Deepfaking: no te fíes de todo lo que ves, deepfakes pose legal challenges that are far from insignificant, e.g., (i) their detection and handling by digital service providers; (ii) their implications for image rights, the right to honor and personal and family privacy; and even (iii) their public order and public interest implications, related to the use of deepfaking for political manipulation.

Deepfake technology does not necessarily have negative implications: it has many possible medical applications, for example, and can be used as a creative tool in film production and other audiovisual arts. However, even these uses are controversial and challenging from a legal perspective. The use of deepfake technology raises no concerns when deepfakes are not credible or are not intended to deceive viewers by distorting reality to the point that the manipulated recording can be mistaken for a genuine one. Nor do deepfakes raise concerns where the depicted persons’ image is used lawfully or they have given their consent.

However, there are concerns in the European Union (EU) about misuses. The European Parliament (EP) recently published a study on tackling the negative impact of deepfake technology, which identifies three categories of risk arising from deepfakes: (i) psychological harm; (ii) economic harm; and (iii) societal harm. It is also worth noting that, unlike most manipulation tools, deepfakes are very hard to detect, and the mechanisms developed to detect them are, at best, of limited accuracy.

Currently, there is no specific national or EU regulation on deepfakes. In Spain, for example, creators of deepfakes may commit offenses against a person’s moral integrity or be charged with slander or libel, but there is no specific provision on deepfake technology or its use.

However, this may change very soon, since the EP study mentions possible initiatives that EU bodies could take to address these issues. These initiatives include (i) adopting the new European AI Regulation (already discussed in this blog: Propuesta de reglamento de la UE sobre inteligencia artificial) and either banning deepfakes or defining deepfake technology as high-risk within the framework of the Regulation; (ii) imposing specific legal requirements on providers of software that creates deepfake content; and (iii) increasing investment in deepfake detection software and in education to raise awareness of the existence and risks of this technology.

In sum, although there is currently no clear regulatory framework for deepfakes, many of the greatest challenges they pose are not only legal but also technical. From a legal perspective, many of these challenges are likely to be addressed (at least partially) within the future European AI regulatory framework.
