European Parliament Opposes Mass Surveillance

  • People need to have control over artificial intelligence systems, and algorithms need to be open source
  • Ban on private databases for the identification of persons, on behavioral forecasting by law enforcement agencies, and on the scoring of citizens
  • Automated recognition should not be used for border control or in public places

 

In order to combat discrimination and guarantee the right to privacy, the EP calls for strict safeguards when AI instruments are used in law enforcement.

In a resolution adopted by 377 votes to 248, with 62 abstentions, MEPs pointed to the risk of algorithmic bias in artificial intelligence (AI) applications and stressed the need for human oversight and strong legal powers to prevent discrimination by AI, especially in the context of law enforcement and border control. Human operators must always make the final decisions, and those monitored by AI-based systems must have access to redress, MEPs say.

Concerns over discrimination

According to the text, identification systems based on artificial intelligence misidentify minority ethnic groups, LGBTI people, the elderly and women at higher rates, which is particularly worrying in the areas of law enforcement and the judiciary. To ensure respect for fundamental rights in the use of these technologies, algorithms should be transparent, traceable and sufficiently documented, MEPs urge. Where possible, public authorities should use open-source software to make AI more transparent.

Controversial technologies

In order to respect privacy and human dignity, MEPs are pushing for a permanent ban on automated facial recognition in public spaces, noting that citizens should only be monitored when they are suspected of a crime. Parliament also calls for a ban on the use of private facial recognition databases (such as the Clearview AI system, which is already in use) and on predictive policing based on behavioral data.

MEPs also want to end the mass-scale scoring of citizens by public authorities with the help of AI, which they consider jeopardizes the principle of non-discrimination and is incompatible with EU fundamental rights.

Finally, Parliament is concerned about the use of biometric data for the remote identification of people. Border control gates that use automated recognition and the iBorderCtrl project (an "intelligent lie-detector system" for the EU's external borders) should be discontinued, MEPs say, urging the Commission to launch infringement proceedings against Member States if necessary.

Quote

The rapporteur Petar Vitanov (S&D, Bulgaria) stated: "Fundamental rights are unconditional. For the first time, we are calling for a moratorium on the rollout of facial recognition systems for law enforcement, as the technology has proven to be ineffective and often leads to discriminatory results. We are clearly opposed to predictive policing based on artificial intelligence, as well as any processing of biometric data that leads to mass surveillance. This is a huge win for all European citizens."

 

Photo: europarl.europa.eu