Image "Cloaking" for Personal Privacy


Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models.

Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, and Ben Y. Zhao.

In Proceedings of USENIX Security Symposium 2020. (Download PDF here)

Fawkes Source Code on GitHub, for development.

 

2020 is a watershed year for machine learning. It has seen the true arrival of commoditized machine learning, where deep learning models and algorithms are readily available to Internet users. GPUs are cheaper and more readily available than ever, and new training methods like transfer learning have made it possible to train powerful deep learning models using smaller sets of data.

But accessible machine learning also has its downsides. A recent New York Times article by Kashmir Hill profiled clearview.ai, an unregulated facial recognition service that has downloaded over 3 billion photos of people from the Internet and social media and used them to build facial recognition models for millions of citizens without their knowledge or permission. Clearview.ai demonstrates just how easy it is to build invasive tools for monitoring and tracking using deep learning.

So how do we protect ourselves against unauthorized third parties building facial recognition models that recognize us wherever we may go? Regulations can and will help restrict the use of machine learning by public companies but will have negligible impact on private organizations, individuals, or even other nation states with similar goals.

The SAND Lab at University of Chicago has developed Fawkes¹, an algorithm and software tool (running locally on your computer) that gives individuals the ability to limit how their own images can be used to track them. At a high level, Fawkes takes your personal images and makes tiny, pixel-level changes that are invisible to the human eye, in a process we call image cloaking. You can then use these "cloaked" photos as you normally would, sharing them on social media, sending them to friends, printing them, or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, the "cloaked" images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable by humans or machines and will not cause errors in model training. However, when someone tries to identify you by presenting an unaltered, "uncloaked" image of you (e.g., a photo taken in public) to the model, the model will fail to recognize you.
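To make the idea concrete, here is a minimal, unofficial sketch of feature-space cloaking written in PyTorch. It is not the Fawkes code (see the GitHub repository and the technical paper for the real implementation, which bounds perturbations with a perceptual DSSIM metric and chooses dissimilar target identities automatically); the `feature_extractor` (any pretrained face-embedding network), the `source` and `target` photos, and the simple per-pixel `budget` are illustrative assumptions.

```python
# Minimal conceptual sketch of feature-space image "cloaking" -- NOT the official
# Fawkes implementation. Assumptions: `source` and `target` are 1xCxHxW tensors
# in [0, 1], and `feature_extractor` is any pretrained face-embedding network.
import torch
import torch.nn.functional as F

def cloak(source, target, feature_extractor, budget=0.03, steps=200, lr=0.01):
    """Perturb `source` so its face embedding moves toward that of `target`,
    while keeping per-pixel changes within `budget` (a simple L-infinity bound
    standing in for the paper's perceptual DSSIM bound)."""
    feature_extractor.eval()
    with torch.no_grad():
        target_feat = feature_extractor(target)            # embedding we want to mimic
    delta = torch.zeros_like(source, requires_grad=True)   # the "cloak" perturbation
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        cloaked = (source + delta).clamp(0, 1)
        # Pull the cloaked image's embedding toward the target identity.
        loss = F.mse_loss(feature_extractor(cloaked), target_feat)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)                  # keep the change imperceptible
    return (source + delta).detach().clamp(0, 1)
```

The point of the sketch is that the optimization moves the image a long way in the model's feature space while barely changing how it looks to a human viewer, which is what lets a model trained on cloaked photos learn a distorted notion of your appearance.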

Fawkes has been tested extensively and proven effective in a variety of environments and is 100% effective against state-of-the-art facial recognition models (Microsoft Azure Face API, Amazon Rekognition, and Face++). We are in the process of adding more material here to explain how and why Fawkes works. For now, please see the link below to our technical paper, which will be presented at the upcoming USENIX Security Symposium, to be held on August 12 to 14.

The Fawkes project is led by two PhD students at SAND Lab, Emily Wenger and Shawn Shan, with important contributions from Jiayun Zhang (SAND Lab visitor and current PhD student at UC San Diego) and Huiying Li, also a SAND Lab PhD student. The faculty advisors are SAND Lab co-directors and Neubauer Professors Ben Zhao and Heather Zheng.

¹ The Guy Fawkes mask, à la V for Vendetta.

In addition to the photos of the team cloaked above, here are a couple more examples of cloaked images and their originals. Can you tell which is the original? (Cloaked image of the Queen courtesy of TheVerge.)