EP Resolution on police use of Artificial Intelligence

European Union

The European Parliament again focuses the spotlight on police use of artificial intelligence in new decision

October 21, 2021

On October 6, 2021, the European Parliament adopted its resolution on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters, with 377 votes in favor, 248 against and 62 abstentions. The resolution brings together a number of different European Parliament (EP) resolutions published in recent months, which we have addressed in this blog post. While acknowledging the widespread use of artificial intelligence (AI) in policing and the prospect that it may lead to more objective decision-making, the EP warned of the risks this technology poses when used by law enforcement and the judiciary.

Although the EP also recognizes that the use of AI can offer substantial benefits in efficiency and accuracy, MEPs highlight the risks this technology poses, such as discrimination and interference with the fundamental rights of individuals. These risks are present in any situation in which AI is used, but they may be exacerbated in criminal matters.

This is precisely why the EP stated that the use of AI applications must be classified as high risk when it could have a significant impact on people’s lives. This classification follows the approach of the draft proposal for a regulation on artificial intelligence (Proposed AI Regulation) published on April 21, 2021, discussed in this blog post. The draft proposal prohibits the use of certain AI applications by the police and proposes a specific legal framework, based on risk control and management, for uses that are not prohibited but still pose a high risk.

Following the proposal and various prior resolutions, the EP stresses that AI systems must be designed so that they protect and benefit all members of society and avoid negative effects. The decisions they make should be transparent and explainable, must not have harmful effects, and must always respect the fundamental rights and freedoms of individuals. More specifically, the EP asks that the algorithms used in these systems be explainable, transparent (with transparency around the source data), traceable and verifiable (so that how the system reaches a certain conclusion can be checked), to ensure that their results can be understood by all users. Likewise, it states that the police must only buy tools and systems whose algorithms can be audited, recommending open-source software wherever possible.

The EP also proposes a legal obligation to prevent the use of AI technologies in mass surveillance systems, as such use does not meet the principles of necessity and proportionality. The Proposed AI Regulation already suggests that authorities ban their use in publicly accessible spaces (with some potentially questionable exceptions). The EP highlights that certain non-EU countries have adopted these systems and, in its opinion, they interfere disproportionately with the fundamental rights of individuals.

However, it does acknowledge the use of facial recognition technology by authorities, although it reaffirms that, as a minimum, this use must comply with the requirements of data minimization, data accuracy, storage limitation, data security and accountability, as well as being lawful, fair and transparent, and pursuing a specific, explicit and legitimate purpose that is clearly defined in law. It also notes that facial recognition technology is not yet as reliable in a forensic context as DNA or fingerprints. The EP calls for a permanent ban on the automated analysis or recognition in publicly accessible spaces of other human features, such as fingerprints, DNA, voice and other signals, as well as a moratorium on the deployment of facial recognition systems with identification functions in law enforcement, until the technical standards can be considered fully compliant with fundamental rights, except when they are used to identify crime victims.

The EP uses the example of Clearview AI, a database of more than 2 billion images taken from social media and other websites, mentioned on this blog, reiterating that the use of that service by law enforcement would not be compatible with EU data protection rules.

The EP also expressed its concern about a number of research projects such as iBorderCtrl, which profiles travelers on the basis of a computer-automated interview taken via the traveler's webcam before the trip and an AI-based analysis of 38 micro-gestures, and which has already been tested in Greece, Hungary and Latvia. MEPs also supported a ban on AI-enabled mass-scale scoring of individuals, as recommended by the Commission’s High-Level Expert Group on AI and included in the Proposed AI Regulation.

We will keep a watchful eye on these proposals and on the adjustments made to the Proposed AI Regulation, and will report back on this blog.
