The European Parliament supports a ban on remote biometric surveillance

The European Parliament has voted in favor of completely outlawing biometric mass surveillance.

Face recognition and other AI-powered remote surveillance technologies have significant consequences for fundamental freedoms and rights like privacy, but they are already starting to be used in public in Europe.

MEPs argue that the automated recognition of people in public spaces should be permanently prohibited in order to protect privacy and human dignity, adding that people should only be monitored when suspected of a crime.

The parliament has also called for a ban on the use of private facial recognition databases, such as the controversial AI system created by US startup Clearview (which is already in use by some police forces in Europe), along with a ban on predictive policing based on behavioral data.

Additionally, MEPs want to outlaw social scoring systems that evaluate a citizen’s trustworthiness based on their personality or behavior.

Back in April, the EU’s executive submitted a draft law to regulate high-risk applications of AI technology, which included a prohibition on social scoring and a partial ban on law enforcement’s use of remote biometric surveillance in public places.

However, the Commission’s proposal was quickly criticized by civil society, the European Data Protection Board, the European Data Protection Supervisor, and some MEPs for not going far enough.

The parliament as a whole has now made it clear it too wants tougher safeguards for basic rights.

In a resolution on a report on artificial intelligence in criminal law, adopted yesterday evening by 377 votes to 248 in support of the LIBE committee’s report, lawmakers sent a clear message about what they will accept in the upcoming negotiations between EU institutions that will define the specifics of the Artificial Intelligence Act.

In the relevant clause on remote biometric surveillance, the Commission is urged to implement, through legislative and non-legislative means, and if necessary through infringement proceedings, a ban on any processing of biometric data, including facial images, for law enforcement purposes, and to stop funding biometric research, deployment, or programs that are likely to lead to indiscriminate mass surveillance in public spaces.

The resolution also addresses algorithmic bias, calling for human oversight and strong legal safeguards to prevent discrimination by AI, particularly in the context of law enforcement and border crossings.

MEPs agreed that final decisions must always be made by human operators and argued that people monitored by AI-powered systems must have access to remedy.

To ensure that fundamental rights are upheld when AI-based identification systems are used, MEPs added, algorithms should be transparent, traceable, and sufficiently documented; such systems have been shown to misidentify minority ethnic groups, LGBTI people, seniors, and women at higher rates.

They demanded that government agencies utilize open-source software whenever possible in order to be more transparent.

The iBorderCtrl project should also be abandoned, according to MEPs. The controversial EU-funded research project is working on developing a sophisticated lie detector based on analyzing facial expressions.

Rapporteur Petar Vitanov (S&D, BG) commented in response: “Fundamental rights are unconditional.” For the first time ever, he said, the parliament is calling for a moratorium on the use of facial recognition technology in law enforcement, as the technology has proven ineffective and frequently produces discriminatory outcomes. He added that MEPs categorically oppose AI-based predictive policing and any processing of biometric data that results in mass surveillance, calling the vote a major win for the people of Europe.

The Commission has been contacted for a response to the vote.

Another extremely contentious area where automation is already being used, and where the parliament’s resolution similarly asks for a prohibition, is the use of AI to support court decisions. Automation runs the risk of solidifying and amplifying structural prejudices in criminal justice systems.

Fair Trials, a global human rights organization, applauded the vote and hailed it as a significant victory for basic rights and equality in the digital era.
