Clearview AI aims to put almost every human in facial recognition database

Investor pitch said 100 billion photos would make almost everyone "identifiable."

The controversial facial recognition company Clearview AI reportedly told investors that it aims to collect 100 billion photos—supposedly enough to ensure that almost every human will be in its database.

"Clearview AI is telling investors it is on track to have 100 billion facial photos in its database within a year, enough to ensure 'almost everyone in the world will be identifiable,' according to a financial presentation from December obtained by The Washington Post," the Post reported today. There are an estimated 7.9 billion people on the planet.

The December presentation was part of an effort to obtain new funding from investors, so 100 billion facial images is more of a goal than a firm plan. However, the presentation said that Clearview has already racked up 10 billion images and is adding 1.5 billion images a month, the Post wrote. Clearview told investors it needs another $50 million to hit its goal of 100 billion photos, the Post reported:

The company said that its "index of faces" has grown from 3 billion images to more than 10 billion since early 2020 and that its data collection system now ingests 1.5 billion images a month.

With $50 million from investors, the company said, it could bulk up its data collection powers to 100 billion photos, build new products, expand its international sales team and pay more toward lobbying government policymakers to "develop favorable regulation."

Clearview collects photos from Internet

As the Post noted, "Clearview has built its database by taking images from social networks and other online sources without the consent of the websites or the people who were photographed. Facebook, Google, Twitter, and YouTube have demanded the company stop taking photos from their sites and delete any that were previously taken. Clearview has argued its data collection is protected by the First Amendment."

The increase in photos could be paired with an expanded business model. Clearview "wants to expand beyond scanning faces for the police, saying in the presentation that it could monitor 'gig economy' workers and is researching a number of new technologies that could identify someone based on how they walk, detect their location from a photo or scan their fingerprints from afar," the Post wrote.

We contacted Clearview about the presentation and received a short statement from Clearview founder and CEO Hoan Ton-That. "Clearview AI's database of publicly available images is lawfully collected, just like any other search engine, including Google. It is used by law enforcement for after-the-crime investigations to assist in identifying perpetrators of crimes," he told Ars.

Ton-That told the Post that the company has collected photos from "millions of different websites" on the public Internet. Ton-That said the company hasn't decided whether to sell its facial recognition service to nongovernment organizations.

Clearview "principles will be updated, as needed"

Clearview's website includes a statement of principles. "Clearview AI currently offers its solutions to only one category of customer—government agencies and their agents," the statement says. "It limits the uses of its system to agencies engaged in lawful investigative processes directed at criminal conduct, or at preventing specific, substantial, and imminent threats to people's lives or physical safety."

In his statement to the Post, Ton-That argued that "every photo in the data set is a potential clue that could save a life, provide justice to an innocent victim, prevent a wrongful identification, or exonerate an innocent person." However, the company's approach could change along with its business model. "Our principles reflect the current uses of our technology. If those uses change, the principles will be updated, as needed," Ton-That said.

Twitter, Facebook, and YouTube ordered Clearview AI to stop scraping their sites in early 2020. Police used Clearview technology to identify and arrest people accused of violence or destruction of property during Black Lives Matter protests later that year. After the January 6, 2021, attack on the US Capitol, Ton-That said, "it is gratifying that Clearview AI has been used to identify the Capitol rioters who attacked our great symbol of democracy."

Clearview lost court ruling

Clearview is facing various privacy lawsuits and lost an important ruling Monday in a case over whether the company violated the Illinois Biometric Information Privacy Act by collecting and using facial images without people's consent. A federal judge "rejected Clearview's First Amendment defense, denied the company's motion to dismiss, and allowed the lawsuits to move forward," the Electronic Frontier Foundation wrote yesterday. "This is an important victory for our privacy over Clearview's profits."

A Vice report yesterday quoted Ton-That as saying that Airbnb, Lyft, and Uber have "expressed interest" in using Clearview facial recognition "for the purposes of consent-based identity verification, since there are a lot of issues with crimes that happen on their platforms." However, Ton-That said, "there are no current plans to work with" those companies, and all three companies told Vice that they have no plans to use Clearview.