EU Parliament sends a global message to protect human rights from AI

Today, the Internal Market (IMCO) and Civil Liberties (LIBE) committees took several important steps to make this landmark legislation more people-focused by banning AI systems used for biometric surveillance, emotion recognition and predictive policing. Disappointingly, the MEPs stopped short of protecting the rights of migrants.

Today, May 11, the IMCO and LIBE committees of the European Parliament voted to put people first in the AI Act. This vote comes at a crucial time for the global regulation of AI systems and is a massive win for our fundamental rights, as the Parliament heeded the demands of diverse civil society voices.

Work on the EU AI Act started in 2020, and EDRi's network and partners have been pushing for a people-first, fundamental rights-based approach from the beginning.

The Parliament is sending a globally significant message to governments and AI developers with its list of bans, siding with civil society's demands that some uses of AI are just too harmful to be allowed. Unfortunately, the European Parliament's support for people's rights stops short of protecting migrants from AI harms.

Sarah Chander, Senior Policy Adviser, EDRi

 

MEPs bring down the hammer against unacceptably risky AI systems

The European Parliament's lead committees send a clear signal that certain uses of AI are simply too harmful to be allowed, including predictive policing systems, many emotion recognition and biometric categorisation systems, and biometric identification in public spaces. These systems present severe threats to fundamental rights and perpetuate systematic discrimination against already marginalised groups, including racial minorities.

We are delighted to see Members of the European Parliament (MEPs) stepping up to prohibit so many of the practices that amount to biometric mass surveillance. With this vote, the EU shows it is willing to put people over profits, freedom over control, and dignity over dystopia.

Ella Jakubowska, Senior Policy Advisor, EDRi

 

MEPs have heeded the warning of over 80 civil society groups and tens of thousands of supporters in the Reclaim Your Face campaign, electing to put a stop to many of the key practices which amount to biometric mass surveillance (BMS). This is a significant victory in the fight against practices that violate our privacy and dignity, turn our public spaces into places of suspicion, and suppress our democratic rights and freedoms.

In particular, this ban covers all real-time and most retrospective ('post') remote biometric identification (RBI) in public spaces, discriminatory biometric categorisation, and emotion recognition in unacceptably risky sectors. This is a historic step to protect people in the EU from many BMS practices by both state and private actors.

Whilst we welcome these steps, RBI is already a reality in law and practice across Europe. We will continue to advocate at EU level and in every member state to end all BMS practices which chill our rights and our participation in public life.

Push for transparency, accountability, and the right to redress

A key demand of civil society has been to require all actors rolling out high-risk AI ('deployers') to be more transparent and accountable about where and how they use certain AI systems. MEPs have agreed that deployers must perform a fundamental rights impact assessment before making use of AI systems.

However, MEPs only require public authorities and 'gatekeepers' (large companies) to publish the results of these assessments. This is an arbitrary distinction and oversight, leaving the public with less information when other companies use high-risk systems.

In addition, transparency requirements have been added for 'foundation models', the large language models sitting behind systems like ChatGPT, including an obligation to disclose the computing power required.

Significant steps have also been taken to empower people affected by the use of AI systems, including a requirement to notify and provide explanations to people who are affected by AI-based decisions or outcomes, and remedies for when rights have been violated.

 

Lingering concerns on the definition of 'high risk' and AI in migration

There is a danger that the safeguards the Parliament has put in place against risky AI systems will be compromised by the proposed changes to the risk classification process in Article 6 of the AI Act.

The change provides a large loophole for AI developers (who have an incentive to under-classify) to argue that they should not be subject to the legislation's requirements. The changes proposed to Article 6 pave the way for legal uncertainty and fragmentation, and ultimately risk undermining the EU AI Act. We will continue pushing against these loopholes, which favour industry actors over people's rights.

Furthermore, the European Parliament has stopped short of protecting the rights of migrants from discriminatory surveillance. The MEPs failed to add to the list of prohibited practices the use of AI to facilitate illegal pushbacks or to profile people in a discriminatory manner. Without these prohibitions, the European Parliament is paving the way for a panopticon at the EU border.

Unfortunately, the Parliament is proposing some very worrying changes relating to what counts as 'high-risk' AI. With the changes in the text, developers will be able to decide if their system is 'significant' enough to be considered high risk, a major red flag for the enforcement of this legislation.

Sarah Chander, Senior Policy Adviser, EDRi

 

What’s Next

A plenary vote with all MEPs is expected to take place in June, which will finalise the Parliament's position on the AI Act. After that, we will enter a period of inter-institutional negotiations with the Member States before this regulation can be passed and become EU law. The broad civil society coalition will continue centring people in these negotiations.