How to Destroy Surveillance Capitalism, by Cory Doctorow

Original post here

Editor’s Note: Surveillance capitalism is everywhere. But it’s not the result of some wrong turn or a rogue abuse of corporate power — it’s the system working as intended. This is the subject of Cory Doctorow’s new book, which we’re thrilled to publish in whole here on OneZero. This is how to destroy surveillance capitalism.

The net of a thousand lies

The most surprising thing about the rebirth of flat Earthers in the 21st century is just how widespread the evidence against them is. You can understand how, centuries ago, people who’d never gained a high-enough vantage point from which to see the Earth’s curvature might come to the commonsense belief that the flat-seeming Earth was, indeed, flat.

But today, when elementary schools routinely dangle GoPro cameras from balloons and loft them high enough to photograph the Earth’s curve — to say nothing of the unexceptional sight of the curved Earth from an airplane window — it takes a heroic effort to maintain the belief that the world is flat.

Likewise for white nationalism and eugenics: In an age where you can become a computational genomics datapoint by swabbing your cheek and mailing it to a gene-sequencing company along with a modest sum of money, “race science” has never been easier to refute.

We are living through a golden age of both readily available facts and denial of those facts. Terrible ideas that have lingered on the fringes for decades or even centuries have gone mainstream seemingly overnight.

When an obscure idea gains currency, there are only two things that can explain its ascendance: Either the person expressing that idea has gotten a lot better at stating their case, or the proposition has become harder to deny in the face of mounting evidence. In other words, if we want people to take climate change seriously, we can get a bunch of Greta Thunbergs to make eloquent, passionate arguments from podiums, winning our hearts and minds, or we can wait for flood, fire, broiling sun, and pandemics to make the case for us. In practice, we’ll probably have to do some of both: The more we’re boiling and burning and drowning and wasting away, the easier it will be for the Greta Thunbergs of the world to convince us.

The arguments for ridiculous beliefs in odious conspiracies like anti-vaccination, climate denial, a flat Earth, and eugenics are no better than they were a generation ago. Indeed, they’re worse because they are being pitched to people who have at least a background awareness of the refuting facts.

Anti-vax has been around since the first vaccines, but the early anti-vaxxers were pitching people who were less equipped to understand even the most basic ideas from microbiology, and moreover, those people had not witnessed the extermination of mass-murdering diseases like polio, smallpox, and measles. Today’s anti-vaxxers are no more eloquent than their forebears, and they have a much harder job.

So can these far-fetched conspiracy theorists really be succeeding on the basis of superior arguments?

Some people think so. Today, there is a widespread belief that machine learning and commercial surveillance can turn even the most fumble-tongued conspiracy theorist into a svengali who can warp your perceptions and win your belief by locating vulnerable people and then pitching them with A.I.-refined arguments that bypass their rational faculties and turn everyday people into flat Earthers, anti-vaxxers, or even Nazis. When the RAND Corporation blames Facebook for “radicalization” and when Facebook’s role in spreading coronavirus misinformation is blamed on its algorithm, the implicit message is that machine learning and surveillance are causing the changes in our consensus about what’s true.

After all, in a world where sprawling and incoherent conspiracy theories like Pizzagate and its successor, QAnon, have widespread followings, something must be afoot.

But what if there’s another explanation? What if it’s the material circumstances, and not the arguments, that are making the difference for these conspiracy pitchmen? What if the trauma of living through real conspiracies all around us — conspiracies among wealthy people, their lobbyists, and lawmakers to bury inconvenient facts and evidence of wrongdoing (these conspiracies are commonly known as “corruption”) — is making people vulnerable to conspiracy theories?

If it’s trauma and not contagion — material conditions and not ideology — that is making the difference today and enabling a rise of repulsive misinformation in the face of easily observed facts, that doesn’t mean our computer networks are blameless. They’re still doing the heavy work of locating vulnerable people and guiding them through a series of ever-more-extreme ideas and communities.

Belief in conspiracy is a raging fire that has done real damage and poses real danger to our planet and species, from epidemics kicked off by vaccine denial to genocides kicked off by racist conspiracies to planetary meltdown caused by denial-inspired climate inaction. Our world is on fire, and so we have to put the fires out — to figure out how to help people see the truth of the world through the conspiracies they’ve been confused by.

But firefighting is reactive. We need fire prevention. We need to strike at the traumatic material conditions that make people vulnerable to the contagion of conspiracy. Here, too, tech has a role to play.

There’s no shortage of proposals to address this. From the EU’s Terrorist Content Regulation, which requires platforms to police and remove “extremist” content, to the U.S. proposals to force tech companies to spy on their users and hold them liable for their users’ bad speech, there’s a lot of energy to force tech companies to solve the problems they created.

There’s a critical piece missing from the debate, though. All these solutions assume that tech companies are a fixture, that their dominance over the internet is a permanent fact. Proposals to replace Big Tech with a more diffused, pluralistic internet are nowhere to be found. Worse: The “solutions” on the table today require Big Tech to stay big because only the very largest companies can afford to implement the systems these laws demand.

Figuring out what we want our tech to look like is crucial if we’re going to get out of this mess. Today, we’re at a crossroads where we’re trying to figure out if we want to fix the Big Tech companies that dominate our internet or if we want to fix the internet itself by unshackling it from Big Tech’s stranglehold. We can’t do both, so we have to choose.

I want us to choose wisely. Taming Big Tech is integral to fixing the internet, and for that, we need digital rights activism.

Digital rights activism, a quarter-century on

Digital rights activism is more than 30 years old now. The Electronic Frontier Foundation turned 30 this year; the Free Software Foundation launched in 1985. For most of the history of the movement, the most prominent criticism leveled against it was that it was irrelevant: The real activist causes were real-world causes (think of the skepticism when Finland declared broadband a human right in 2010), and real-world activism was shoe-leather activism (think of Malcolm Gladwell’s contempt for “clicktivism”). But as tech has grown more central to our daily lives, these accusations of irrelevance have given way first to accusations of insincerity (“You only care about tech because you’re shilling for tech companies”) and then to accusations of negligence (“Why didn’t you foresee that tech could be such a destructive force?”). But digital rights activism is right where it’s always been: looking out for the humans in a world where tech is inexorably taking over.

The latest version of this critique comes in the form of “surveillance capitalism,” a term coined by business professor Shoshana Zuboff in her long and influential 2019 book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Zuboff argues that “surveillance capitalism” is a unique creature of the tech industry and that it is unlike any other abusive commercial practice in history, one that is “constituted by unexpected and often illegible mechanisms of extraction, commodification, and control that effectively exile persons from their own behavior while producing new markets of behavioral prediction and modification. Surveillance capitalism challenges democratic norms and departs in key ways from the centuries-long evolution of market capitalism.” It is a new and deadly form of capitalism, a “rogue capitalism,” and our lack of understanding of its unique capabilities and dangers represents an existential, species-wide threat. She’s right that capitalism today threatens our species, and she’s right that tech poses unique challenges to our species and civilization, but she’s really wrong about how tech is different and why it threatens our species.

What’s more, I think that her incorrect diagnosis will lead us down a path that ends up making Big Tech stronger, not weaker. We need to take down Big Tech, and to do that, we need to start by correctly identifying the problem.

Tech exceptionalism, then and now

Early critics of the digital rights movement — perhaps best represented by campaigning organizations like the Electronic Frontier Foundation, the Free Software Foundation, Public Knowledge, and others that focused on preserving and enhancing basic human rights in the digital realm — damned activists for practicing “tech exceptionalism.” Around the turn of the millennium, serious people ridiculed any claim that tech policy mattered in the “real world.” Claims that tech rules had implications for speech, association, privacy, search and seizure, and fundamental rights and equities were treated as ridiculous, an elevation of the concerns of sad nerds arguing about Star Trek on bulletin board systems above the struggles of the Freedom Riders, Nelson Mandela, or the Warsaw ghetto uprising.

In the decades since, accusations of “tech exceptionalism” have only sharpened as tech’s role in everyday life has expanded: Now that tech has infiltrated every corner of our life and our online lives have been monopolized by a handful of giants, defenders of digital freedoms are accused of carrying water for Big Tech, providing cover for its self-interested negligence (or worse, nefarious plots).

From my perspective, the digital rights movement has remained stationary while the rest of the world has moved. From the earliest days, the movement’s concern was users and the toolsmiths who provided the code they needed to realize their fundamental rights. Digital rights activists only cared about companies to the extent that companies were acting to uphold users’ rights (or, just as often, when companies were acting so foolishly that they threatened to bring down new rules that would also make it harder for good actors to help users).

The “surveillance capitalism” critique recasts the digital rights movement in a new light again: not as alarmists who overestimate the importance of their shiny toys nor as shills for big tech but as serene deck-chair rearrangers whose long-standing activism is a liability because it makes them incapable of perceiving novel threats as they continue to fight the last century’s tech battles.

But tech exceptionalism is a sin no matter who practices it.

Don’t believe the hype

You’ve probably heard that “if you’re not paying for the product, you’re the product.” As we’ll see below, that’s true, if incomplete. But what is absolutely true is that ad-driven Big Tech’s customers are advertisers, and what companies like Google and Facebook sell is their ability to convince you to buy stuff. Big Tech’s product is persuasion. The services — social media, search engines, maps, messaging, and more — are delivery systems for persuasion.

The fear of surveillance capitalism starts from the (correct) presumption that everything Big Tech says about itself is probably a lie. But the surveillance capitalism critique makes an exception for the claims Big Tech makes in its sales literature — the breathless hype in the pitches to potential advertisers online and in ad-tech seminars about the efficacy of its products: It assumes that Big Tech is as good at influencing us as they claim they are when they’re selling influencing products to credulous customers. That’s a mistake because sales literature is not a reliable indicator of a product’s efficacy.

Surveillance capitalism assumes that because advertisers buy a lot of what Big Tech is selling, Big Tech must be selling something real. But Big Tech’s massive sales could just as easily be the result of a popular delusion or something even more pernicious: monopolistic control over our communications and commerce.

Being watched changes your behavior, and not for the better. It creates risks for our social progress. Zuboff’s book features beautifully wrought explanations of these phenomena. But Zuboff also claims that surveillance literally robs us of our free will — that when our personal data is mixed with machine learning, it creates a system of persuasion so devastating that we are helpless before it. That is, Facebook uses an algorithm to analyze the data it nonconsensually extracts from your daily life and uses it to customize your feed in ways that get you to buy stuff. It is a mind-control ray out of a 1950s comic book, wielded by mad scientists whose supercomputers guarantee them perpetual and total world domination.

What is persuasion?

To understand why you shouldn’t worry about mind-control rays — but why you should worry about surveillance and Big Tech — we must start by unpacking what we mean by “persuasion.”

Google, Facebook, and other surveillance capitalists promise their customers (the advertisers) that if they use machine-learning tools trained on unimaginably large data sets of nonconsensually harvested personal information, they will be able to uncover ways to bypass the rational faculties of the public and direct their behavior, creating a stream of purchases, votes, and other desired outcomes.

But there’s little evidence that this is happening. Instead, the predictions that surveillance capitalism delivers to its customers are much less impressive. Rather than finding ways to bypass our rational faculties, surveillance capitalists like Mark Zuckerberg mostly do one or more of three things:

1. Segmenting

If you’re selling diapers, you have better luck if you pitch them to people in maternity wards. Not everyone who enters or leaves a maternity ward just had a baby, and not everyone who just had a baby is in the market for diapers. But having a baby is a really reliable correlate of being in the market for diapers, and being in a maternity ward is highly correlated with having a baby. Hence diaper ads around maternity wards (and even pitchmen for baby products, who haunt maternity wards with baskets full of freebies).

Surveillance capitalism is segmenting times a billion. Diaper vendors can go way beyond people in maternity wards (though they can do that, too, with things like location-based mobile ads). They can target you based on whether you’re reading articles about child-rearing, diapers, or a host of other subjects, and data mining can suggest unobvious keywords to advertise against. They can target you based on the articles you’ve recently read. They can target you based on what you’ve recently purchased. They can target you based on whether you receive emails or private messages about these subjects — or even if you speak aloud about them (though Facebook and the like convincingly claim that’s not happening — yet).
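
To make the mechanics concrete, here is a minimal sketch of this kind of segment scoring. Everything in it (the signal names, the weights, the threshold) is invented for illustration; real ad platforms work at vastly larger scale, but the shape of the logic is the same:

```python
# Hypothetical segment scoring: each signal is a correlate of "in the
# market for diapers," and a user whose combined signal weights cross
# a threshold gets added to the advertiser's audience segment.

SIGNAL_WEIGHTS = {
    "visited_maternity_ward": 0.6,        # location-based correlate
    "read_child_rearing_articles": 0.3,   # inferred from browsing
    "bought_crib_recently": 0.5,          # purchase history
    "searched_diaper_reviews": 0.7,       # search queries
}

def in_diaper_segment(observed_signals: set[str], threshold: float = 0.8) -> bool:
    """Crude linear score: sum the weights of the signals we observed."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in observed_signals)
    return score >= threshold

print(in_diaper_segment({"read_child_rearing_articles"}))                      # False
print(in_diaper_segment({"bought_crib_recently", "searched_diaper_reviews"}))  # True
```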

This is seriously creepy.

But it’s not mind control.

It doesn’t deprive you of your free will. It doesn’t trick you.

Think of how surveillance capitalism works in politics. Surveillance capitalist companies sell political operatives the power to locate people who might be receptive to their pitch. Candidates campaigning on finance industry corruption seek people struggling with debt; candidates campaigning on xenophobia seek out racists. Political operatives have always targeted their message whether their intentions were honorable or not: Union organizers set up pitches at factory gates, and white supremacists hand out fliers at John Birch Society meetings.

But this is an inexact and thus wasteful practice. The union organizer can’t know which worker to approach on the way out of the factory gates and may waste their time on a covert John Birch Society member; the white supremacist doesn’t know which of the Birchers are so delusional that making it to a meeting is as much as they can manage and which ones might be convinced to cross the country to carry a tiki torch through the streets of Charlottesville, Virginia.

Because targeting improves the yields on political pitches, it can accelerate the pace of political upheaval by making it possible for everyone who has secretly wished for the toppling of an autocrat — or just an 11-term incumbent politician — to find everyone else who feels the same way at very low cost. This has been critical to the rapid crystallization of recent political movements including Black Lives Matter and Occupy Wall Street as well as less savory players like the far-right white nationalist movements that marched in Charlottesville.

It’s important to differentiate this kind of political organizing from influence campaigns; finding people who secretly agree with you isn’t the same as convincing people to agree with you. The rise of phenomena like nonbinary or otherwise nonconforming gender identities is often characterized by reactionaries as the result of online brainwashing campaigns that convince impressionable people that they have been secretly queer all along.

But the personal accounts of those who have come out tell a different story where people who long harbored a secret about their gender were emboldened by others coming forward and where people who knew that they were different but lacked a vocabulary for discussing that difference learned the right words from these low-cost means of finding people and learning about their ideas.

2. Deception

Lies and fraud are pernicious, and surveillance capitalism supercharges them through targeting. If you want to sell a fraudulent payday loan or subprime mortgage, surveillance capitalism can help you find people who are both desperate and unsophisticated and thus receptive to your pitch. This accounts for the rise of many phenomena, like multilevel marketing schemes, in which deceptive claims about potential earnings and the efficacy of sales techniques are targeted at desperate people by advertising against search queries that indicate, for example, someone struggling with ill-advised loans.

Surveillance capitalism also abets fraud by making it easy to locate other people who have been similarly deceived, forming a community of people who reinforce one another’s false beliefs. Think of the forums where people who are being victimized by multilevel marketing frauds gather to trade tips on how to improve their luck in peddling the product.

Sometimes, online deception involves replacing someone’s correct beliefs with incorrect ones, as it does in the anti-vaccination movement, whose victims are often people who start out believing in vaccines but are convinced by seemingly plausible evidence that leads them into the false belief that vaccines are harmful.

But it’s much more common for fraud to succeed when it doesn’t have to displace a true belief. When my daughter contracted head lice at daycare, one of the daycare workers told me I could get rid of them by treating her hair and scalp with olive oil. I didn’t know anything about head lice, and I assumed that the daycare worker did, so I tried it (it didn’t work, and it doesn’t work). It’s easy to end up with false beliefs when you simply don’t know any better and when those beliefs are conveyed by someone who seems to know what they’re doing.

This is pernicious and difficult — and it’s also the kind of thing the internet can help guard against by making true information available, especially in a form that exposes the underlying deliberations among parties with sharply divergent views, such as Wikipedia. But it’s not brainwashing; it’s fraud. In the majority of cases, the victims of these fraud campaigns have an informational void filled in the customary way, by consulting a seemingly reliable source. If I look up the length of the Brooklyn Bridge and learn that it is 5,800 feet long, but in reality, it is 5,989 feet long, the underlying deception is a problem, but it’s a problem with a simple remedy. It’s a very different problem from the anti-vax issue in which someone’s true belief is displaced by a false one by means of sophisticated persuasion.

3. Domination

Surveillance capitalism is the result of monopoly. Monopoly is the cause, and surveillance capitalism and its negative outcomes are the effects of monopoly. I’ll get into this in depth later, but for now, suffice it to say that the tech industry has grown up with a radical theory of antitrust that has allowed companies to grow by merging with their rivals, buying up their nascent competitors, and expanding to control whole market verticals.

One example of how monopolism aids in persuasion is through dominance: Google makes editorial decisions about its algorithms that determine the sort order of the responses to our queries. If a cabal of fraudsters have set out to trick the world into thinking that the Brooklyn Bridge is 5,800 feet long, and if Google gives a high search rank to this group in response to queries like “How long is the Brooklyn Bridge?” then the first eight or 10 screens’ worth of Google results could be wrong. And since most people don’t go beyond the first couple of results — let alone the first page of results — Google’s choice means that many people will be deceived.
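
A toy model makes the arithmetic plain. The click-through shares below are invented, but the logic (whoever holds the top positions holds most of the audience) is the point of the example:

```python
# Toy click-through model: if the top-ranked results answer a query
# wrongly, the share of searchers deceived is roughly the share of
# clicks those positions capture. The distribution here is invented.

click_share = [0.30, 0.15, 0.10, 0.07, 0.05]  # positions 1-5
wrong_positions = {0, 1, 2}                    # suppose the top three are wrong

deceived = sum(share for pos, share in enumerate(click_share) if pos in wrong_positions)
print(f"At least {deceived:.0%} of searchers come away deceived")  # 55%
```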

Google’s dominance over search — more than 86% of web searches are performed through Google — means that the way it orders its search results has an outsized effect on public beliefs. Ironically, Google claims this is why it can’t afford to have any transparency in its algorithm design: Google’s search dominance makes the results of its sorting too important to risk telling the world how it arrives at those results lest some bad actor discover a flaw in the ranking system and exploit it to push its point of view to the top of the search results. There’s an obvious remedy to a company that is too big to audit: break it up into smaller pieces.

Zuboff calls surveillance capitalism a “rogue capitalism” whose data-hoarding and machine-learning techniques rob us of our free will. But influence campaigns that seek to displace existing, correct beliefs with false ones have an effect that is small and temporary while monopolistic dominance over informational systems has massive, enduring effects. Controlling the results to the world’s search queries means controlling access both to arguments and their rebuttals and, thus, control over much of the world’s beliefs. If our concern is how corporations are foreclosing on our ability to make up our own minds and determine our own futures, the impact of dominance far exceeds the impact of manipulation and should be central to our analysis and any remedies we seek.

4. Bypassing our rational faculties

This is the good stuff: using machine learning, “dark patterns,” engagement hacking, and other techniques to get us to do things that run counter to our better judgment. This is mind control.

Some of these techniques have proven devastatingly effective (if only in the short term). The use of countdown timers on a purchase completion page can create a sense of urgency that causes you to ignore the nagging internal voice suggesting that you should shop around or sleep on your decision. The use of people from your social graph in ads can provide “social proof” that a purchase is worth making. Even the auction system pioneered by eBay is calculated to play on our cognitive blind spots, letting us feel like we “own” something because we bid on it, thus encouraging us to bid again when we are outbid to ensure that “our” things stay ours.
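
As a sketch of how cheaply such a pattern is built, here is an invented version of the checkout-page countdown. The tell is in the last line: the deadline isn’t real, and the timer simply restarts for the next visitor:

```python
import time

# Invented sketch of a checkout-page "urgency" countdown. The scarcity
# is manufactured: the timer is not tied to any real deadline, and it
# resets for every visitor.

def fake_urgency_banner(seconds: int = 5) -> None:
    for remaining in range(seconds, 0, -1):
        print(f"\rOffer expires in {remaining}s -- complete your purchase now!", end="", flush=True)
        time.sleep(1)
    print("\rOffer extended! (It always is.)" + " " * 30)

fake_urgency_banner()
```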

Games are extraordinarily good at this. “Free to play” games manipulate us through many techniques, such as presenting players with a series of smoothly escalating challenges that create a sense of mastery and accomplishment but which sharply transition into a set of challenges that are impossible to overcome without paid upgrades. Add some social proof to the mix — a stream of notifications about how well your friends are faring — and before you know it, you’re buying virtual power-ups to get to the next level.

Companies have risen and fallen on these techniques, and the “fallen” part is worth paying attention to. In general, living things adapt to stimulus: Something that is very compelling or noteworthy when you first encounter it fades with repetition until you stop noticing it altogether. Consider the refrigerator hum that irritates you when it starts up but disappears into the background so thoroughly that you only notice it when it stops again.

That’s why behavioral conditioning uses “intermittent reinforcement schedules.” Instead of giving you a steady drip of encouragement or setbacks, games and gamified services scatter rewards on a randomized schedule — often enough to keep you interested and random enough that you can never quite find the pattern that would make it boring.
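
Here is a minimal sketch of such a schedule, a variable-ratio design of the kind the paragraph describes. The reward probability is invented; what matters is that the payoffs arrive at unpredictable intervals:

```python
import random

# Variable-ratio reinforcement: every action has the same small chance
# of paying off, so rewards arrive at unpredictable intervals and no
# pattern ever emerges for the player to habituate to.

def play_session(actions: int, reward_probability: float = 0.15) -> list[int]:
    """Return the action numbers on which a reward was delivered."""
    return [i for i in range(actions) if random.random() < reward_probability]

print(play_session(30))  # a different, patternless reward sequence every run
```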

Intermittent reinforcement is a powerful behavioral tool, but it also represents a collective action problem for surveillance capitalism. The “engagement techniques” invented by the behaviorists of surveillance capitalist companies are quickly copied across the whole sector so that what starts as a mysteriously compelling fillip in the design of a service — like “pull to refresh” or alerts when someone likes your posts or side quests that your characters get invited to while in the midst of main quests — quickly becomes dully ubiquitous. The impossible-to-nail-down nonpattern of randomized drips from your phone becomes a grey-noise wall of sound as every single app and site starts to make use of whatever seems to be working at the time.

From the surveillance capitalist’s point of view, our adaptive capacity is like a harmful bacterium that deprives it of its food source — our attention — and novel techniques for snagging that attention are like new antibiotics that can be used to breach our defenses and destroy our self-determination. And there are techniques like that. Who can forget the Great Zynga Epidemic, when all of our friends were caught in FarmVille’s endless, mindless dopamine loops? But every new attention-commanding technique is jumped on by the whole industry and used so indiscriminately that antibiotic resistance sets in. Given enough repetition, almost all of us develop immunity to even the most powerful techniques — by 2013, two years after Zynga’s peak, its user base had halved.

Not everyone, of course. Some people never adapt to stimulus, just as some people never stop hearing the hum of the refrigerator. This is why most people who are exposed to slot machines play them for a while and then move on while a small and tragic minority liquidate their kids’ college funds, buy adult diapers, and position themselves in front of a machine until they collapse.

But surveillance capitalism’s margins on behavioral modification suck. Tripling the rate at which someone buys a widget sounds great unless the base rate is way less than 1% with an improved rate of… still less than 1%. Even penny slot machines pull down pennies for every spin while surveillance capitalism rakes in infinitesimal penny fractions.
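
Spelling out that arithmetic with an illustrative base rate of 0.1% (the numbers are invented, but the shape of the problem is not):

```python
# Tripling a tiny base rate: the relative lift sounds spectacular,
# but the absolute yield is still a rounding error.

base_rate = 0.001               # 0.1% of untargeted viewers buy the widget
targeted_rate = base_rate * 3   # "tripled" by behavioral targeting: 0.3%

extra_sales = (targeted_rate - base_rate) * 1_000_000
print(f"Extra sales per million impressions: {extra_sales:.0f}")   # 2000
print(f"Share still ignoring the ad: {1 - targeted_rate:.1%}")     # 99.7%
```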

Slot machines’ high returns mean that they can be profitable just by draining the fortunes of the small rump of people who are pathologically vulnerable to them and unable to adapt to their tricks. But surveillance capitalism can’t survive on the fractional pennies it brings down from that vulnerable sliver — that’s why, after the Great Zynga Epidemic had finally burned itself out, the small number of still-addicted players left behind couldn’t sustain it as a global phenomenon. And new powerful attention weapons aren’t easy to find, as is evidenced by the long years since the last time Zynga had a hit. Despite the hundreds of millions of dollars that Zynga has to spend on developing new tools to blast through our adaptation, it has never managed to repeat the lucky accident that let it snag so much of our attention for a brief moment in 2009. Powerhouses like Supercell have fared a little better, but they are rare and throw away many failures for every success.

The vulnerability of small segments of the population to dramatic, efficient corporate manipulation is a real concern that’s worthy of our attention and energy. But it’s not an existential threat to society.

If data is the new oil, then surveillance capitalism’s engine has a leak

This adaptation problem offers an explanation for one of surveillance capitalism’s most alarming traits: its relentless hunger for data and its endless expansion of data-gathering capabilities through the spread of sensors, online surveillance, and acquisition of data streams from third parties.

Zuboff observes this phenomenon and concludes that data must be very valuable if surveillance capitalism is so hungry for it. (In her words: “Just as industrial capitalism was driven to the continuous intensification of the means of production, so surveillance capitalists and their market players are now locked into the continuous intensification of the means of behavioral modification and the gathering might of instrumentarian power.”) But what if the voracious appetite is because data has such a short half-life — because people become inured so quickly to new, data-driven persuasion techniques — that the companies are locked in an arms race with our limbic system? What if it’s all a Red Queen’s race where they have to run ever faster — collect ever-more data — just to stay in the same spot?
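
One way to picture this hypothesis is as a decay curve: each technique’s yield erodes as audiences habituate, so total yield can only be held steady by finding fresh techniques, which takes fresh data. A toy model, with an invented half-life:

```python
# Toy Red Queen's race: a persuasion technique's yield decays as
# audiences habituate. The four-week half-life is invented; the point
# is the shape of the curve, not the numbers.

def technique_yield(weeks: float, half_life_weeks: float = 4.0) -> float:
    """Fraction of launch-day effectiveness remaining after `weeks`."""
    return 0.5 ** (weeks / half_life_weeks)

for week in (0, 4, 8, 12):
    print(f"week {week:2d}: {technique_yield(week):.2f} of launch effectiveness")
# Holding total yield steady means shipping new techniques on a
# treadmill -- and collecting ever-more data to find them.
```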

Of course, all of Big Tech’s persuasion techniques work in concert with one another, and collecting data is useful beyond mere behavioral trickery.

If someone wants to recruit you to buy a refrigerator or join a pogrom, they might use profiling and targeting to send messages to people they judge to be good sales prospects. The messages themselves may be deceptive, making claims about things you’re not very knowledgeable about (food safety and energy efficiency or eugenics and historical claims about racial superiority). They might use search engine optimization and/or armies of fake reviewers and commenters and/or paid placement to dominate the discourse so that any search for further information takes you back to their messages. And finally, they may refine the different pitches using machine learning and other techniques to figure out what kind of pitch works best on someone like you.

Each phase of this process benefits from surveillance: The more data they have, the more precisely they can profile you and target you with specific messages. Think of how you’d sell a fridge if you knew that the warranty on your prospect’s fridge just expired and that they were expecting a tax rebate in April.

Also, the more data they have, the better they can craft deceptive messages — if I know that you’re into genealogy, I might not try to feed you pseudoscience about genetic differences between “races,” sticking instead to conspiratorial secret histories of “demographic replacement” and the like.

Facebook also helps you locate people who have the same odious or antisocial views as you. It makes it possible to find other people who want to carry tiki torches through the streets of Charlottesville in Confederate cosplay. It can help you find other people who want to join your militia and go to the border to look for undocumented migrants to terrorize. It can help you find people who share your belief that vaccines are poison and that the Earth is flat.

There is one way in which targeted advertising uniquely benefits those advocating for socially unacceptable causes: It is invisible. Racism is widely geographically dispersed, and there are few places where racists — and only racists — gather. This is similar to the problem of selling refrigerators in that potential refrigerator purchasers are geographically dispersed and there are few places where you can buy an ad that will be primarily seen by refrigerator customers. But buying a refrigerator is socially acceptable while being a Nazi is not, so you can buy a billboard or advertise in the newspaper sports section for your refrigerator business, and the only potential downside is that your ad will be seen by a lot of people who don’t want refrigerators, resulting in a lot of wasted expense.

But even if you wanted to advertise your Nazi movement on a billboard or prime-time TV or the sports section, you would struggle to find anyone willing to sell you the space for your ad partly because they disagree with your views and partly because they fear censure (boycott, reputational damage, etc.) from other people who disagree with your views.

Targeted ads solve this problem: On the internet, every ad unit can be different for every person, meaning that you can buy ads that are only shown to people who appear to be Nazis and not to people who hate Nazis. When there’s spillover — when someone who hates racism is shown a racist recruiting ad — there is some fallout; the platform or publication might get an angry public or private denunciation. But the nature of the risk assumed by an online ad buyer is different than the risks to a traditional publisher or billboard owner who might want to run a Nazi ad.

Online ads are placed by algorithms that broker between a diverse ecosystem of self-serve ad platforms that anyone can buy an ad through, so the Nazi ad that slips onto your favorite online publication isn’t seen as their moral failing but rather as a failure in some distant, upstream ad supplier. When a publication gets a complaint about an offensive ad that’s appearing in one of its units, it can take some steps to block that ad, but the Nazi might buy a slightly different ad from a different broker serving the same unit. And in any event, internet users increasingly understand that when they see an ad, it’s likely that the advertiser did not choose that publication and that the publication has no idea who its advertisers are.

These layers of indirection between advertisers and publishers serve as moral buffers: Today’s moral consensus is largely that publishers shouldn’t be held responsible for the ads that appear on their pages because they’re not actively choosing to put those ads there. Because of this, Nazis are able to overcome significant barriers to organizing their movement.

Data has a complex relationship with domination. Being able to spy on your customers can alert you to their preferences for your rivals and allow you to head off your rivals at the pass.

More importantly, if you can dominate the information space while also gathering data, then you make other deceptive tactics stronger because it’s harder to break out of the web of deceit you’re spinning. Domination — that is, ultimately becoming a monopoly — and not the data itself is the supercharger that makes every tactic worth pursuing because monopolistic domination deprives your target of an escape route.

If you’re a Nazi who wants to ensure that your prospects primarily see deceptive, confirming information when they search for more, you can improve your odds by seeding the search terms they use through your initial communications. You don’t need to own the top 10 results for “voter suppression” if you can convince your marks to confine their search terms to “voter fraud,” which throws up a very different set of search results.

Surveillance capitalists are like stage mentalists who claim that their extraordinary insights into human behavior let them guess the word that you wrote down and folded up in your pocket but who really use shills, hidden cameras, sleight of hand, and brute-force memorization to amaze you.

Or perhaps they’re more like pick-up artists, the misogynistic cult that promises to help awkward men have sex with women by teaching them “neurolinguistic programming” phrases, body language techniques, and psychological manipulation tactics like “negging” — offering unsolicited negative feedback to women to lower their self-esteem and prick their interest.

Some pick-up artists eventually manage to convince women to go home with them, but it’s not because these men have figured out how to bypass women’s critical faculties. Rather, pick-up artists’ “success” stories are a mix of women who were incapable of giving consent, women who were coerced, women who were intoxicated, self-destructive women, and a few women who were sober and in command of their faculties but who didn’t realize straightaway that they were with terrible men but rectified the error as soon as they could.

Pick-up artists believe they have figured out a secret back door that bypasses women’s critical faculties, but they haven’t. Many of the tactics they deploy, like negging, became the butt of jokes (just like people joke about bad ad targeting), and there’s a good chance that anyone they try these tactics on will immediately recognize them and dismiss the men who use them as irredeemable losers.

Pick-up artists are proof that people can believe they have developed a system of mind control even when it doesn’t work. Pick-up artists simply exploit the fact that one-in-a-million chances can come through for you if you make a million attempts, and then they assume that the other 999,999 times, they simply performed the technique incorrectly and commit themselves to doing better next time. There’s only one group of people who find pick-up artist lore reliably convincing: other would-be pick-up artists whose anxiety and insecurity make them vulnerable to scammers and delusional men who convince them that if they pay for tutelage and follow instructions, then they will someday succeed. Pick-up artists assume they fail to entice women because they are bad at being pick-up artists, not because pick-up artistry is bullshit. Pick-up artists are bad at selling themselves to women, but they’re much better at selling themselves to men who pay to learn the secrets of pick-up artistry.

Department store pioneer John Wanamaker is said to have lamented, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” The fact that Wanamaker thought that only half of his advertising spending was wasted is a tribute to the persuasiveness of advertising executives, who are much better at convincing potential clients to buy their services than they are at convincing the general public to buy their clients’ wares.

What is Facebook?

Facebook is heralded as the origin of all of our modern plagues, and it’s not hard to see why. Some tech companies want to lock their users in but make their money by monopolizing access to the market for apps for their devices and gouging them on prices rather than by spying on them (like Apple). Some companies don’t care about locking in users because they’ve figured out how to spy on them no matter where they are and what they’re doing and can turn that surveillance into money (Google). Facebook alone among the Western tech giants has built a business based on locking in its users and spying on them all the time.

Facebook’s surveillance regime is really without parallel in the Western world. Though Facebook tries to prevent itself from being visible on the public web, hiding most of what goes on there from people unless they’re logged into Facebook, the company has nevertheless booby-trapped the entire web with surveillance tools in the form of Facebook “Like” buttons that web publishers include on their sites to boost their Facebook profiles. Facebook also makes various libraries and other useful code snippets available to web publishers that act as surveillance tendrils on the sites where they’re used, funneling information about visitors to the site — newspapers, dating sites, message boards — to Facebook.
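
The mechanism is mundane. Here, sketched as a dictionary, is roughly what a browser hands over when a page embeds a facebook.com widget; the header values are invented, but the channel (a referer plus a third-party cookie on every page view) is the real tendril:

```python
# Invented illustration of the request an embedding page triggers.
# One request like this per page view is enough to reconstruct a
# browsing history without the visitor ever clicking "Like."

widget_request = {
    "method": "GET",
    "url": "https://www.facebook.com/plugins/like.php",  # loaded by the host page
    "headers": {
        "Referer": "https://example-newspaper.com/article-you-are-reading",
        "Cookie": "fb_user_id=abc123",    # ties the page view to a dossier
        "User-Agent": "Mozilla/5.0 ...",  # raw material for fingerprinting
    },
}

for header, value in widget_request["headers"].items():
    print(f"{header}: {value}")
```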

Facebook offers similar tools to app developers, so the apps — games, fart machines, business review services, apps for keeping abreast of your kid’s schooling — you use will send information about your activities to Facebook even if you don’t have a Facebook account and even if you don’t download or use Facebook apps. On top of all that, Facebook buys data from third-party brokers on shopping habits, physical location, use of “loyalty” programs, financial transactions, etc., and cross-references that with the dossiers it develops on activity on Facebook and with apps and the public web.

Though it’s easy to integrate the web with Facebook — linking to news stories and such — Facebook products are generally not available to be integrated back into the web itself. You can embed a tweet in a Facebook post, but if you embed a Facebook post in a tweet, you just get a link back to Facebook and must log in before you can see it. Facebook has used extreme technological and legal countermeasures to prevent rivals from allowing their users to embed Facebook snippets in competing services or to create alternative interfaces to Facebook that merge your Facebook inbox with those of other services that you use.

And Facebook is incredibly popular, with 2.3 billion claimed users (though many believe this figure to be inflated). Facebook has been used to organize genocidal pogroms, racist riots, anti-vaccination movements, flat Earth cults, and the political lives of some of the world’s ugliest, most brutal autocrats. There are some really alarming things going on in the world, and Facebook is implicated in many of them, so it’s easy to conclude that these bad things are the result of Facebook’s mind-control system, which it rents out to anyone with a few bucks to spend.

To understand what role Facebook plays in the formulation and mobilization of antisocial movements, we need to understand the dual nature of Facebook.

Because it has a lot of users and a lot of data about those users, Facebook is a very efficient tool for locating people with hard-to-find traits, the kinds of traits that are widely diffused in the population such that advertisers have historically struggled to find a cost-effective way to reach them. Think back to refrigerators: Most of us only replace our major appliances a few times in our entire lives. If you’re a refrigerator manufacturer or retailer, you have these brief windows in the life of a consumer during which they are pondering a purchase, and you have to somehow reach them. Anyone who’s ever registered a title change after buying a house can attest that appliance manufacturers are incredibly desperate to reach anyone who has even the slenderest chance of being in the market for a new fridge.

Facebook makes finding people shopping for refrigerators a lot easier. It can target ads to people who’ve registered a new home purchase, to people who’ve searched for refrigerator buying advice, to people who have complained about their fridge dying, or any combination thereof. It can even target people who’ve recently bought other kitchen appliances on the theory that someone who’s just replaced their stove and dishwasher might be in a fridge-buying kind of mood. The vast majority of people who are reached by these ads will not be in the market for a new fridge, but — crucially — the percentage of people who are looking for fridges that these ads reach is much larger than it is for any group that might be subjected to traditional, offline targeted refrigerator marketing.
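
With invented numbers, here is why that percentage matters even though most viewers still are not shopping for a fridge:

```python
# Invented rates: perhaps 1 in 500 people is fridge-shopping this month
# in the general population, versus 1 in 25 in a composite-targeted
# audience (new home, fridge-advice searches, dead-fridge complaints).

population_rate = 1 / 500   # untargeted billboard or sports-section ad
targeted_rate = 1 / 25      # Facebook-style composite targeting

print(f"Untargeted: {population_rate:.1%} of impressions reach a shopper")
print(f"Targeted:   {targeted_rate:.1%} of impressions reach a shopper")
print(f"A {targeted_rate / population_rate:.0f}x better hit rate, even though "
      f"{1 - targeted_rate:.0%} of the targeted audience still isn't shopping")
```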

Facebook also makes it a lot easier to find people who have the same rare disease as you, which might have been impossible in earlier eras — the closest fellow sufferer might otherwise be hundreds of miles away. It makes it easier to find people who went to the same high school as you even though decades have passed and your former classmates have all been scattered to the four corners of the Earth.

Facebook also makes it much easier to find people who hold the same rare political beliefs as you. If you’ve always harbored a secret affinity for socialism but never dared utter this aloud lest you be demonized by your neighbors, Facebook can help you discover other people who feel the same way (and it might just demonstrate to you that your affinity is more widespread than you ever suspected). It can make it easier to find people who share your sexual identity. And again, it can help you to understand that what you thought was a shameful secret that affected only you was really a widely shared trait, giving you both comfort and the courage to come out to the people in your life.

All of this presents a dilemma for Facebook: Targeting makes the company’s ads more effective than traditional ads, but it also lets advertisers see just how effective their ads are. While advertisers are pleased to learn that Facebook ads are more effective than ads on systems with less sophisticated targeting, advertisers can also see that in nearly every case, the people who see their ads ignore them. Or, at best, the ads work on a subconscious level, creating nebulous unmeasurables like “brand recognition.” This means that the price per ad is very low in nearly every case.

To make things worse, many Facebook groups spark precious little discussion. Your little-league soccer team, the people with the same rare disease as you, and the people you share a political affinity with may exchange the odd flurry of messages at critical junctures, but on a daily basis, there’s not much to say to your old high school chums or other hockey-card collectors.

With nothing but “organic” discussion, Facebook would not generate enough traffic to sell enough ads to make the money it needs to continually expand by buying up its competitors while returning handsome sums to its investors.

So Facebook has to gin up traffic by sidetracking its own forums: Every time Facebook’s algorithm injects controversial materials — inflammatory political articles, conspiracy theories, outrage stories — into a group, it can hijack that group’s nominal purpose with its desultory discussions and supercharge those discussions by turning them into bitter, unproductive arguments that drag on and on. Facebook is optimized for engagement, not happiness, and it turns out that automated systems are pretty good at figuring out things that people will get angry about.

Facebook can modify our behavior but only in a couple of trivial ways. First, it can lock in all your friends and family members so that you check and check and check with Facebook to find out what they are up to; and second, it can make you angry and anxious. It can force you to choose between being interrupted constantly by updates — a process that breaks your concentration and makes it hard to be introspective — and staying in touch with your friends. This is a very limited form of mind control, and it can only really make us miserable, angry, and anxious.

This is why Facebook’s targeting systems — both the ones it shows to advertisers and the ones that let users find people who share their interests — are so next-gen and smooth and easy to use as well as why its message boards have a toolset that seems like it hasn’t changed since the mid-2000s. If Facebook delivered an equally flexible, sophisticated message-reading system to its users, those users could defend themselves against being nonconsensually eyeball-fucked with Donald Trump headlines.

The more time you spend on Facebook, the more ads it gets to show you. The solution to Facebook’s ads only working one in a thousand times is for the company to try to increase how much time you spend on Facebook by a factor of a thousand. Rather than thinking of Facebook as a company that has figured out how to show you exactly the right ad in exactly the right way to get you to do what its advertisers want, think of it as a company that has figured out how to make you slog through an endless torrent of arguments even though they make you miserable, spending so much time on the site that it eventually shows you at least one ad that you respond to.

Monopoly and the right to the future tense

Zuboff and her cohort are particularly alarmed at the extent to which surveillance allows corporations to influence our decisions, taking away something she poetically calls “the right to the future tense” — that is, the right to decide for yourself what you will do in the future.

It’s true that advertising can tip the scales one way or another: When you’re thinking of buying a fridge, a timely fridge ad might end the search on the spot. But Zuboff puts enormous and undue weight on the persuasive power of surveillance-based influence techniques. Most of these don’t work very well, and the ones that do won’t work for very long. The makers of these influence tools are confident they will someday refine them into systems of total control, but they are hardly unbiased observers, and the risks from their dreams coming true are very speculative.

By contrast, Zuboff is rather sanguine about 40 years of lax antitrust practice that has allowed a handful of companies to dominate the internet, ushering in an information age with, as one person on Twitter noted, five giant websites each filled with screenshots of the other four.

However, if we are to be alarmed that we might lose the right to choose for ourselves what our future will hold, then monopoly’s nonspeculative, concrete, here-and-now harms should be front and center in our debate over tech policy.

Start with “digital rights management.” In 1998, Bill Clinton signed the Digital Millennium Copyright Act (DMCA) into law. It’s a complex piece of legislation with many controversial clauses but none more so than Section 1201, the “anti-circumvention” rule.

This is a blanket ban on tampering with systems that restrict access to copyrighted works. The ban is so thoroughgoing that it prohibits removing a copyright lock even when no copyright infringement takes place. This is by design: The activities that the DMCA’s Section 1201 sets out to ban are not copyright infringements; rather, they are legal activities that frustrate manufacturers’ commercial plans.

For example, Section 1201’s first major application was on DVD players as a means of enforcing the region coding built into those devices. DVD-CCA, the body that standardized DVDs and DVD players, divided the world into six regions and specified that DVD players must check each disc to determine which regions it was authorized to be played in. DVD players would have their own corresponding region (a DVD player bought in the U.S. would be region 1 while one bought in India would be region 5). If the player and the disc’s region matched, the player would play the disc; otherwise, it would reject it.
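
A minimal sketch of the check being protected here, using a simplified model (real discs encode regions as a bitmask, but the logic is the same). Note that nothing about the check is hard to implement, or to remove; Section 1201 is what makes removing it legally dangerous:

```python
# Simplified region-coding check: the disc carries a set of authorized
# regions, the player carries one fixed region, and playback is refused
# on a mismatch even though the disc was lawfully purchased.

PLAYER_REGION = 1  # a player sold in the U.S.

def can_play(disc_regions: set[int], player_region: int = PLAYER_REGION) -> bool:
    """Refuse playback unless the disc authorizes this player's region."""
    return player_region in disc_regions

us_disc = {1}                     # pressed for the U.S. market
india_disc = {5}                  # pressed for the Indian market
region_free = {1, 2, 3, 4, 5, 6}  # what "noncompliant" players effectively see

print(can_play(us_disc))      # True  -- regions match
print(can_play(india_disc))   # False -- lawful disc, refused anyway
print(can_play(region_free))  # True
```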

However, watching a lawfully produced disc in a country other than the one where you purchased it is not copyright infringement — it’s the opposite. Copyright law imposes this duty on customers for a movie: You must go into a store, find a licensed disc, and pay the asking price. Do that — and nothing else — and you and copyright are square with one another.

The fact that a movie studio wants to charge Indians less than Americans or release in Australia later than it releases in the U.K. has no bearing on copyright law. Once you lawfully acquire a DVD, it is no copyright infringement to watch it no matter where you happen to be.

So DVD and DVD player manufacturers would not be able to use accusations of abetting copyright infringement to punish manufacturers who made noncompliant players that would play discs from any region or repair shops that modified players to let you watch out-of-region discs or software programmers who created programs to let you do this.

That’s where Section 1201 of the DMCA comes in: By banning tampering with an “access control,” the rule gave manufacturers and rights holders standing to sue competitors who released superior products with lawful features that the market demanded (in this case, region-free players).

This is an odious scam against consumers, but as time went by, Section 1201 grew to encompass a rapidly expanding constellation of devices and services as canny manufacturers have realized certain things:

  • Any device with software in it contains a “copyrighted work” — i.e., the software.
  • A device can be designed so that reconfiguring the software requires bypassing an “access control for copyrighted works,” which is a potential felony under Section 1201.
  • Thus, companies can control their customers’ behavior after they take home their purchases by designing products so that all unpermitted uses require modifications that fall afoul of Section 1201.

Section 1201 then beco­mes a means for manu­fac­tu­rers of all descrip­ti­ons to force their custo­mers to arrange their affairs to bene­fit the manu­fac­tu­rers’ share­hol­ders instead of them­sel­ves.

This mani­fests in many ways: from a new gene­ra­tion of inkjet prin­ters that use coun­ter­me­a­su­res to prevent third-party ink that cannot be bypas­sed without legal risks to simi­lar systems in trac­tors that prevent third-party tech­ni­ci­ans from swap­ping in the manu­fac­tu­rer’s own parts that are not recog­ni­zed by the trac­tor’s control system until it is supplied with a manu­fac­tu­rer’s unlock code.

Closer to home, Apple’s iPho­nes use these measu­res to prevent both third-party service and third-party soft­ware insta­lla­tion. This allows Apple to decide when an iPhone is beyond repair and must be shred­ded and land­fi­lled as oppo­sed to the iPho­ne’s purcha­ser. (Apple is noto­ri­ous for its envi­ron­men­tally catas­trop­hic policy of destroying old elec­tro­nics rather than permit­ting them to be canni­ba­li­zed for parts.) This is a very useful power to wield, espe­ci­ally in light of CEO Tim Cook’s Janu­ary 2019 warning to inves­tors that the company’s profits are endan­ge­red by custo­mers choo­sing to hold onto their phones for longer rather than repla­cing them.

Apple’s use of copyright locks also allows it to establish a monopoly over how its customers acquire software for their mobile devices. The App Store’s commercial terms guarantee Apple a share of all revenues generated by the apps sold there (historically 30%), meaning that Apple gets paid when you buy an app from its store and then continues to get paid every time you buy something using that app. This comes out of the bottom line of software developers, who must either charge more or accept lower profits for their products.

Crucially, Apple’s use of copyright locks gives it the power to make editorial decisions about which apps you may and may not install on your own device. Apple has used this power to reject dictionaries for containing obscene words; to limit political speech, especially from apps that make sensitive political commentary, such as an app that notifies you every time a U.S. drone kills someone somewhere in the world; and to object to a game that commented on the Israel-Palestine conflict.

Apple often justi­fies mono­poly power over soft­ware insta­lla­tion in the name of secu­rity, arguing that its vetting of apps for its store means that it can guard its users against apps that contain survei­llance code. But this cuts both ways. In China, the govern­ment orde­red Apple to prohi­bit the sale of privacy tools like VPNs with the excep­tion of VPNs that had deli­be­ra­tely intro­du­ced flaws desig­ned to let the Chinese state eaves­drop on users. Because Apple uses tech­no­lo­gi­cal coun­ter­me­a­su­res — with legal backs­tops — to block custo­mers from insta­lling unaut­ho­ri­zed apps, Chinese iPhone owners cannot readily (or legally) acquire VPNs that would protect them from Chinese state snoo­ping.

Zuboff calls survei­llance capi­ta­lism a “rogue capi­ta­lism.” Theo­re­ti­ci­ans of capi­ta­lism claim that its virtue is that it aggre­ga­tes infor­ma­tion in the form of consu­mers’ deci­si­ons, produ­cing effi­ci­ent markets. Survei­llance capi­ta­lism’s suppo­sed power to rob its victims of their free will through compu­ta­ti­o­nally super­char­ged influ­ence campaigns means that our markets no longer aggre­gate custo­mers’ deci­si­ons because we custo­mers no longer decide — we are given orders by survei­llance capi­ta­lism’s mind-control rays.

If our concern is that markets cease to func­tion when consu­mers can no longer make choi­ces, then copy­right locks should concern us at least as much as influ­ence campaigns. An influ­ence campaign might nudge you to buy a certain brand of phone; but the copy­right locks on that phone abso­lu­tely deter­mine where you get it servi­ced, which apps can run on it, and when you have to throw it away rather than fixing it.

Search order and the right to the future tense

Markets are posed as a kind of magic: By discovering otherwise hidden information conveyed by the free choices of consumers, those consumers’ local knowledge is integrated into a self-correcting system that makes efficient allocations — more efficient than any computer could calculate. But monopolies are incompatible with that notion. When you only have one app store, the owner of the store — not the consumer — decides on the range of choices. As Boss Tweed once said, “I don’t care who does the electing, so long as I get to do the nominating.” A monopolized market is an election whose candidates are chosen by the monopolist.

This ballot rigging is made more perni­ci­ous by the exis­tence of mono­po­lies over search order. Google’s search market share is about 90%. When Google’s ranking algo­rithm puts a result for a popu­lar search term in its top 10, that helps deter­mine the beha­vior of milli­ons of people. If Google’s answer to “Are vacci­nes dange­rous?” is a page that rebuts anti-vax cons­pi­racy theo­ries, then a siza­ble portion of the public will learn that vacci­nes are safe. If, on the other hand, Google sends those people to a site affir­ming the anti-vax cons­pi­ra­cies, a siza­ble portion of those milli­ons will come away convin­ced that vacci­nes are dange­rous.

Google’s algo­rithm is often tric­ked into serving disin­for­ma­tion as a promi­nent search result. But in these cases, Google isn’t persu­a­ding people to change their minds; it’s just presen­ting somet­hing untrue as fact when the user has no cause to doubt it.

This is true whet­her the search is for “Are vacci­nes dange­rous?” or “best restau­rants near me.” Most users will never look past the first page of search results, and when the overw­hel­ming majo­rity of people all use the same search engine, the ranking algo­rithm deployed by that search engine will deter­mine myriad outco­mes (whet­her to adopt a child, whet­her to have cancer surgery, where to eat dinner, where to move, where to apply for a job) to a degree that vastly outs­trips any beha­vi­o­ral outco­mes dicta­ted by algo­rith­mic persu­a­sion tech­ni­ques.

Many of the ques­ti­ons we ask search engi­nes have no empi­ri­cally correct answers: “Where should I eat dinner?” is not an objec­tive ques­tion. Even ques­ti­ons that do have correct answers (“Are vacci­nes dange­rous?”) don’t have one empi­ri­cally supe­rior source for that answer. Many pages affirm the safety of vacci­nes, so which one goes first? Under condi­ti­ons of compe­ti­tion, consu­mers can choose from many search engi­nes and stick with the one whose algo­rith­mic judg­ment suits them best, but under condi­ti­ons of mono­poly, we all get our answers from the same place.

Google’s search domi­nance isn’t a matter of pure merit: The company has leve­ra­ged many tactics that would have been prohi­bi­ted under clas­si­cal, pre-Ronald-Reagan anti­trust enfor­ce­ment stan­dards to attain its domi­nance. After all, this is a company that has deve­lo­ped two major products: a really good search engine and a pretty good Hotmail clone. Every other major success it’s had — Android, YouTube, Google Maps, etc. — has come through an acqui­si­tion of a nascent compe­ti­tor. Many of the company’s key divi­si­ons, such as the adver­ti­sing tech­no­logy of Double­Click, violate the histo­ri­cal anti­trust prin­ci­ple of struc­tu­ral sepa­ra­tion, which forbade firms from owning subsi­di­a­ries that compe­ted with their custo­mers. Rail­ro­ads, for exam­ple, were barred from owning freight compa­nies that compe­ted with the ship­pers whose freight they carried.

If we’re worried about giant compa­nies subver­ting markets by strip­ping consu­mers of their ability to make free choi­ces, then vigo­rous anti­trust enfor­ce­ment seems like an exce­llent remedy. If we’d denied Google the right to effect its many mergers, we would also have probably denied it its total search domi­nance. Without that domi­nance, the pet theo­ries, biases, errors (and good judg­ment, too) of Google search engi­ne­ers and product mana­gers would not have such an outsi­zed effect on consu­mer choice.

This goes for many other companies. Amazon, a classic surveillance capitalist, is obviously the dominant tool for searching Amazon — though many people find their way to Amazon through Google searches and Facebook posts — and obviously, Amazon controls Amazon search. That means that Amazon’s own self-serving editorial choices — like promoting its own house brands over rival goods from its sellers, as well as its own pet theories, biases, and errors — determine much of what we buy on Amazon. And since Amazon is the dominant e-commerce retailer outside of China and since it attained that dominance by buying up both large rivals and nascent competitors in defiance of historical antitrust rules, we can blame the monopoly for stripping consumers of their right to the future tense and the ability to shape markets by making informed choices.

Not every mono­po­list is a survei­llance capi­ta­list, but that doesn’t mean they’re not able to shape consu­mer choi­ces in wide-ranging ways. Zuboff lauds Apple for its App Store and iTunes Store, insis­ting that adding price tags to the featu­res on its plat­forms has been the secret to resis­ting survei­llance and thus crea­ting markets. But Apple is the only retai­ler allo­wed to sell on its plat­forms, and it’s the second-largest mobile device vendor in the world. The inde­pen­dent soft­ware vendors that sell through Apple’s market­place accuse the company of the same survei­llance sins as Amazon and other big retai­lers: spying on its custo­mers to find lucra­tive new products to launch, effec­ti­vely using inde­pen­dent soft­ware vendors as free-market rese­ar­chers, then forcing them out of any markets they disco­ver.

Because of its use of copyright locks, Apple’s mobile customers are not legally allowed to switch to a rival app retailer on their iPhones, even if they would prefer one. Apple, obviously, is the only entity that gets to decide how it ranks the results of search queries in its stores. These decisions ensure that some apps are often installed (because they appear on page one) and others are never installed (because they appear on page one million). Apple’s search-ranking design decisions have a vastly more significant effect on consumer behaviors than influence campaigns delivered by surveillance capitalism’s ad-serving bots.

Mono­po­lists can afford slee­ping pills for watch­dogs

Only the most extreme market ideo­lo­gues think that markets can self-regu­late without state over­sight. Markets need watch­dogs — regu­la­tors, lawma­kers, and other elements of demo­cra­tic control — to keep them honest. When these watch­dogs sleep on the job, then markets cease to aggre­gate consu­mer choi­ces because those choi­ces are cons­trai­ned by ille­gi­ti­mate and decep­tive acti­vi­ties that compa­nies are able to get away with because no one is holding them to account.

But this kind of regu­la­tory capture doesn’t come cheap. In compe­ti­tive sectors, where rivals are cons­tantly eroding one anot­her’s margins, indi­vi­dual firms lack the surplus capi­tal to effec­ti­vely lobby for laws and regu­la­ti­ons that serve their ends.

Many of the harms of survei­llance capi­ta­lism are the result of weak or none­xis­tent regu­la­tion. Those regu­la­tory vacuums spring from the power of mono­po­lists to resist stron­ger regu­la­tion and to tailor what regu­la­tion exists to permit their exis­ting busi­nes­ses.

Here’s an exam­ple: When firms over-collect and over-retain our data, they are at incre­a­sed risk of suffe­ring a breach — you can’t leak data you never collec­ted, and once you delete all copies of that data, you can no longer leak it. For more than a decade, we’ve lived through an endless parade of ever-worse­ning data brea­ches, each one uniquely horri­ble in the scale of data brea­ched and the sensi­ti­vity of that data.

But still, firms conti­nue to over-collect and over-retain our data for three reasons:

1. They are locked in the afore­men­ti­o­ned limbic arms race with our capa­city to shore up our atten­ti­o­nal defense systems to resist their new persu­a­sion tech­ni­ques. They’re also locked in an arms race with their compe­ti­tors to find new ways to target people for sales pitches. As soon as they disco­ver a soft spot in our atten­ti­o­nal defen­ses (a coun­te­rin­tui­tive, unob­vi­ous way to target poten­tial refri­ge­ra­tor buyers), the public begins to wise up to the tactic, and their compe­ti­tors leap on it, haste­ning the day in which all poten­tial refri­ge­ra­tor buyers have been inured to the pitch.

2. They beli­eve the survei­llance capi­ta­lism story. Data is cheap to aggre­gate and store, and both propo­nents and oppo­nents of survei­llance capi­ta­lism have assu­red mana­gers and product desig­ners that if you collect enough data, you will be able to perform sorce­rous acts of mind control, thus super­char­ging your sales. Even if you never figure out how to profit from the data, some­one else will even­tu­ally offer to buy it from you to give it a try. This is the hall­mark of all econo­mic bubbles: acqui­ring an asset on the assump­tion that some­one else will buy it from you for more than you paid for it, often to sell to some­one else at an even grea­ter price.

3. The penalties for leaking data are negligible. Most countries limit these penalties to actual damages, meaning that consumers who’ve had their data breached have to show actual monetary harms to receive an award. In 2014, Home Depot disclosed that it had lost credit-card data for 53 million of its customers, but it settled the matter by paying those customers about $0.34 each (roughly $18 million in total) — and a third of that $0.34 wasn’t even paid in cash. It took the form of a credit to procure a largely ineffectual credit-monitoring service.

But the harms from brea­ches are much more exten­sive than these actual-dama­ges rules capture. Iden­tity thie­ves and frauds­ters are wily and endlessly inven­tive. All the vast brea­ches of our century are being conti­nu­ously recom­bi­ned, the data sets merged and mined for new ways to victi­mize the people whose data was present in them. Any reaso­na­ble, evidence-based theory of deter­rence and compen­sa­tion for brea­ches would not confine dama­ges to actual dama­ges but rather would allow users to claim these future harms.

Howe­ver, even the most ambi­ti­ous privacy rules, such as the EU Gene­ral Data Protec­tion Regu­la­tion, fall far short of captu­ring the nega­tive exter­na­li­ties of the plat­forms’ negli­gent over-collec­tion and over-reten­tion, and what penal­ties they do provide are not aggres­si­vely pursued by regu­la­tors.

This tole­rance of — or indif­fe­rence to — data over-collec­tion and over-reten­tion can be ascri­bed in part to the sheer lobbying muscle of the plat­forms. They are so profi­ta­ble that they can handily afford to divert gigan­tic sums to fight any real change — that is, change that would force them to inter­na­lize the costs of their survei­llance acti­vi­ties.

And then there’s state survei­llance, which the survei­llance capi­ta­lism story dismis­ses as a relic of anot­her era when the big worry was being jailed for your dissi­dent speech, not having your free will strip­ped away with machine lear­ning.

But state survei­llance and private survei­llance are inti­ma­tely rela­ted. As we saw when Apple was cons­crip­ted by the Chinese govern­ment as a vital colla­bo­ra­tor in state survei­llance, the only really affor­da­ble and trac­ta­ble way to conduct mass survei­llance on the scale prac­ti­ced by modern states — both “free” and auto­cra­tic states — is to suborn commer­cial servi­ces.

Whet­her it’s Google being used as a loca­tion trac­king tool by local law enfor­ce­ment across the U.S. or the use of social media trac­king by the Depart­ment of Home­land Secu­rity to build dossi­ers on parti­ci­pants in protests against Immi­gra­tion and Customs Enfor­ce­ment’s family sepa­ra­tion prac­ti­ces, any hard limits on survei­llance capi­ta­lism would hams­tring the state’s own survei­llance capa­bi­lity. Without Palan­tir, Amazon, Google, and other major tech contrac­tors, U.S. cops would not be able to spy on Black people, ICE would not be able to manage the caging of chil­dren at the U.S. border, and state welfare systems would not be able to purge their rolls by dres­sing up cruelty as empi­ri­cism and clai­ming that poor and vulne­ra­ble people are ineli­gi­ble for assis­tance. At least some of the states’ unwi­lling­ness to take meaning­ful action to curb survei­llance should be attri­bu­ted to this symbi­o­tic rela­ti­ons­hip. There is no mass state survei­llance without mass commer­cial survei­llance.

Monopolism is key to the project of mass state surveillance. It’s true that smaller tech firms are apt to be less well-defended than Big Tech, whose security experts are drawn from the tops of their field and who are given enormous resources to secure and monitor their systems against intruders. But smaller firms also have less to protect: fewer users, whose data is fragmented across more systems, each of which has to be suborned one at a time by state actors.

A concentrated tech sector that works with authorities is a much more powerful ally in the project of mass state surveillance than a fragmented one composed of smaller actors. The U.S. tech sector is small enough that all of its top executives fit around a single boardroom table, as they did in Trump Tower in 2017, shortly after Trump’s inauguration. Most of its biggest players bid to win JEDI, the Pentagon’s $10 billion Joint Enterprise Defense Infrastructure cloud contract. Like other highly concentrated industries, Big Tech rotates its key employees in and out of government service, sending them to serve in the Department of Defense and the White House, then hiring former Pentagon and White House top staffers and officers to work in their own government relations departments.

They can even make a good case for doing this: After all, when there are only four or five big compa­nies in an industry, everyone quali­fied to regu­late those compa­nies has served as an execu­tive in at least a couple of them — because, like­wise, when there are only five compa­nies in an industry, everyone quali­fied for a senior role at any of them is by defi­ni­tion working at one of the other ones.

While survei­llance doesn’t cause mono­po­lies, mono­po­lies certainly abet survei­llance.

Indus­tries that are compe­ti­tive are frag­men­ted — compo­sed of compa­nies that are at each other’s thro­ats all the time and eroding one anot­her’s margins in bids to steal their best custo­mers. This leaves them with much more limi­ted capi­tal to use to lobby for favo­ra­ble rules and a much harder job of getting everyone to agree to pool their resour­ces to bene­fit the industry as a whole.

Survei­llance combi­ned with machine lear­ning is suppo­sed to be an exis­ten­tial crisis, a species-defi­ning moment at which our free will is just a few more advan­ces in the field from being strip­ped away. I am skep­ti­cal of this claim, but I do think that tech poses an exis­ten­tial threat to our soci­ety and possibly our species.

But that threat grows out of mono­poly.

One of the conse­quen­ces of tech’s regu­la­tory capture is that it can shift liabi­lity for poor secu­rity deci­si­ons onto its custo­mers and the wider soci­ety. It is abso­lu­tely normal in tech for compa­nies to obfus­cate the workings of their products, to make them deli­be­ra­tely hard to unders­tand, and to thre­a­ten secu­rity rese­ar­chers who seek to inde­pen­dently audit those products.

IT is the only field in which this is prac­ti­ced: No one builds a bridge or a hospi­tal and keeps the compo­si­tion of the steel or the equa­ti­ons used to calcu­late load stres­ses a secret. It is a frankly bizarre prac­tice that leads, time and again, to grotes­que secu­rity defects on farci­cal scales, with whole clas­ses of devi­ces being reve­a­led as vulne­ra­ble long after they are deployed in the field and put into sensi­tive places.

The monopoly power that keeps any meaningful consequences for breaches at bay means that tech companies continue to build terrible products that are insecure by design and that end up integrated into our lives, in possession of our data, and connected to our physical world. For years, Boeing has struggled with the aftermath of a series of bad technology decisions that made its 737 Max fleet a global pariah, a rare instance in which bad tech decisions have been seriously punished in the market.

These bad secu­rity deci­si­ons are compoun­ded yet again by the use of copy­right locks to enforce busi­ness-model deci­si­ons against consu­mers. Recall that these locks have become the go-to means for shaping consu­mer beha­vior, making it tech­ni­cally impos­si­ble to use third-party ink, insu­lin, apps, or service depots in connec­tion with your lawfully acqui­red property.

Recall also that these copyright locks are backstopped by legislation (such as Section 1201 of the DMCA or Article 6 of the 2001 EU Copyright Directive) that bans tampering with (“circumventing”) them, and these statutes have been used to threaten security researchers who make disclosures about vulnerabilities without permission from manufacturers.

This amounts to a manufacturer’s veto over safety warnings and criticism. While this is far from the legislative intent of the DMCA and its sister statutes around the world, Congress has not intervened to clarify the statute, nor will it, because doing so would run counter to the interests of powerful, large firms whose lobbying muscle is unstoppable.

Copy­right locks are a double whammy: They create bad secu­rity deci­si­ons that can’t be freely inves­ti­ga­ted or discus­sed. If markets are suppo­sed to be machi­nes for aggre­ga­ting infor­ma­tion (and if survei­llance capi­ta­lism’s noti­o­nal mind-control rays are what make it a “rogue capi­ta­lism” because it denies consu­mers the power to make deci­si­ons), then a program of legally enfor­ced igno­rance of the risks of products makes mono­po­lism even more of a “rogue capi­ta­lism” than survei­llance capi­ta­lism’s influ­ence campaigns.

And unlike mind-control rays, enfor­ced silence over secu­rity is an imme­di­ate, docu­men­ted problem, and it does cons­ti­tute an exis­ten­tial threat to our civi­li­za­tion and possibly our species. The proli­fe­ra­tion of inse­cure devi­ces — espe­ci­ally devi­ces that spy on us and espe­ci­ally when those devi­ces also can mani­pu­late the physi­cal world by, say, stee­ring your car or flip­ping a brea­ker at a power station — is a kind of tech­no­logy debt.

In soft­ware design, “tech­no­logy debt” refers to old, baked-in deci­si­ons that turn out to be bad ones in hind­sight. Perhaps a long-ago deve­lo­per deci­ded to incor­po­rate a networ­king proto­col made by a vendor that has since stop­ped suppor­ting it. But everyt­hing in the product still relies on that supe­ran­nu­a­ted proto­col, and so, with each revi­sion, the product team has to work around this obso­lete core, adding compa­ti­bi­lity layers, surroun­ding it with secu­rity checks that try to shore up its defen­ses, and so on. These Band-Aid measu­res compound the debt because every subse­quent revi­sion has to make allo­wan­ces for them, too, like inter­est moun­ting on a preda­tory subprime loan. And like a subprime loan, the inter­est mounts faster than you can hope to pay it off: The product team has to put so much energy into main­tai­ning this complex, brittle system that they don’t have any time left over to refac­tor the product from the ground up and “pay off the debt” once and for all.
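
A minimal sketch of how that debt compounds, in Python, assuming a hypothetical product still built around a dead vendor protocol; every name below is invented for illustration:

```python
def legacy_send(payload: bytes) -> None:
    """Speaks the superannuated vendor protocol the product was built on."""
    ...  # the obsolete core that nobody has time to replace

def send(payload: bytes) -> None:
    """The shim every new feature must now route through."""
    # Band-Aid 1: the old protocol chokes on bare newlines.
    payload = payload.replace(b"\n", b"\r\n")
    # Band-Aid 2: a bolted-on limit standing in for security checks
    # the protocol never had.
    if len(payload) > 4096:
        raise ValueError("legacy core cannot handle large payloads")
    legacy_send(payload)
    # Each revision must make allowances for these patches, too:
    # interest mounting on the debt.
```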

Typi­cally, tech­no­logy debt results in a tech­no­lo­gi­cal bank­ruptcy: The product gets so brittle and unsus­tai­na­ble that it fails catas­trop­hi­cally. Think of the anti­qua­ted COBOL-based banking and accoun­ting systems that fell over at the start of the pande­mic emer­gency when confron­ted with surges of unem­ploy­ment claims. Some­ti­mes that ends the product; some­ti­mes it takes the company down with it. Being caught in the default of a tech­no­logy debt is scary and trau­ma­tic, just like losing your house due to bank­ruptcy is scary and trau­ma­tic.

But the tech­no­logy debt crea­ted by copy­right locks isn’t indi­vi­dual debt; it’s syste­mic. Everyone in the world is expo­sed to this over-leve­rage, as was the case with the 2008 finan­cial crisis. When that debt comes due — when we face a cascade of secu­rity brea­ches that thre­a­ten global ship­ping and logis­tics, the food supply, phar­ma­ceu­ti­cal produc­tion pipe­li­nes, emer­gency commu­ni­ca­ti­ons, and other criti­cal systems that are accu­mu­la­ting tech­no­logy debt in part due to the presence of deli­be­ra­tely inse­cure and deli­be­ra­tely unau­di­ta­ble copy­right locks — it will indeed pose an exis­ten­tial risk.

Privacy and mono­poly

Many tech compa­nies are grip­ped by an ortho­doxy that holds that if they just gather enough data on enough of our acti­vi­ties, everyt­hing else is possi­ble — the mind control and endless profits. This is an unfal­si­fi­a­ble hypot­he­sis: If data gives a tech company even a tiny impro­ve­ment in beha­vior predic­tion and modi­fi­ca­tion, the company decla­res that it has taken the first step toward global domi­na­tion with no end in sight. If a company fails to attain any impro­ve­ments from gathe­ring and analy­zing data, it decla­res success to be just around the corner, attai­na­ble once more data is in hand.

Surveillance tech is far from the first industry to embrace a nonsensical, self-serving belief that harms the rest of the world, and it is not the first industry to profit handsomely from such a delusion. Long before hedge-fund managers were claiming (falsely) that they could beat the S&P 500, there were plenty of other “respectable” industries that have since been revealed as quackery. From the makers of radium suppositories (a real thing!) to the cruel sociopaths who claimed they could “cure” gay people, history is littered with the formerly respectable titans of discredited industries.

This is not to say that there’s nothing wrong with Big Tech and its ideo­lo­gi­cal addic­tion to data. While survei­llan­ce’s bene­fits are mostly overs­ta­ted, its harms are, if anyt­hing, unders­ta­ted.

There’s real irony here. The belief in surveillance capitalism as a “rogue capitalism” is driven by the belief that markets wouldn’t tolerate firms that are gripped by false beliefs. After all, an oil company that has false beliefs about where the oil is will eventually go broke digging dry wells.

But monopolists get to do terrible things for a long time before they pay the price. Think of how concentration in the finance sector allowed the subprime crisis to fester as bond-rating agencies, regulators, investors, and critics all fell under the sway of a false belief that complex mathematics could construct “fully hedged” debt instruments that could not possibly default. A small bank that engaged in this kind of malfeasance would simply have gone broke; it could never outrun the inevitable crisis by growing so big that it averted the crisis altogether. But large banks were able to continue to attract investors, and when they finally did come a-cropper, the world’s governments bailed them out. The worst offenders of the subprime crisis are bigger than they were in 2008, bringing home more profits and paying their execs even larger sums.

Big Tech is able to prac­tice survei­llance not just because it is tech but because it is big. The reason every web publis­her embeds a Face­book “Like” button is that Face­book domi­na­tes the inter­net’s social media refer­rals — and every one of those “Like” buttons spies on everyone who lands on a page that contains them (see also: Google Analy­tics embeds, Twit­ter buttons, etc.).

The reason the world’s govern­ments have been slow to create meaning­ful penal­ties for privacy brea­ches is that Big Tech’s concen­tra­tion produ­ces huge profits that can be used to lobby against those penal­ties — and Big Tech’s concen­tra­tion means that the compa­nies invol­ved are able to arrive at a unified nego­ti­a­ting posi­tion that super­char­ges the lobbying.

The reason that the smar­test engi­ne­ers in the world want to work for Big Tech is that Big Tech commands the lion’s share of tech industry jobs.

The reason people who are aghast at Face­book’s and Google’s and Amazon’s data-hand­ling prac­ti­ces conti­nue to use these servi­ces is that all their friends are on Face­book; Google domi­na­tes search; and Amazon has put all the local merchants out of busi­ness.

Competitive markets would weaken the companies’ lobbying muscle by reducing their profits and pitting them against each other in regulatory forums. They would give customers other places to go to get their online services. They would make the companies small enough to regulate and pave the way to meaningful penalties for breaches. They would let engineers with ideas that challenged the surveillance orthodoxy raise capital to compete with the incumbents. They would give web publishers multiple ways to reach audiences and make the case against Facebook and Google and Twitter embeds.

In other words, while survei­llance doesn’t cause mono­po­lies, mono­po­lies certainly abet survei­llance.

Ronald Reagan, pioneer of tech mono­po­lism

Tech­no­logy excep­ti­o­na­lism is a sin, whet­her it’s prac­ti­ced by tech­no­logy’s blind propo­nents or by its critics. Both of these camps are prone to explai­ning away mono­po­lis­tic concen­tra­tion by citing some special charac­te­ris­tic of the tech industry, like network effects or first-mover advan­tage. The only real diffe­rence between these two groups is that the tech apolo­gists say mono­poly is inevi­ta­ble so we should just let tech get away with its abuses while compe­ti­tion regu­la­tors in the U.S. and the EU say mono­poly is inevi­ta­ble so we should punish tech for its abuses but not try to break up the mono­po­lies.

To unders­tand how tech became so mono­po­lis­tic, it’s useful to look at the dawn of the consu­mer tech industry: 1979, the year the Apple II Plus laun­ched and became the first success­ful home compu­ter. That also happens to be the year that Ronald Reagan hit the campaign trail for the 1980 presi­den­tial race — a race he won, leading to a radi­cal shift in the way that anti­trust concerns are hand­led in America. Reagan’s cohort of poli­ti­ci­ans — inclu­ding Marga­ret That­cher in the U.K., Brian Mulro­ney in Canada, Helmut Kohl in Germany, and Augusto Pino­chet in Chile — went on to enact simi­lar reforms that even­tu­ally spread around the world.

Anti­trust’s story began nearly a century before all that with laws like the Sher­man Act, which took aim at mono­po­lists on the grounds that mono­po­lies were bad in and of them­sel­ves — sque­e­zing out compe­ti­tors, crea­ting “dise­co­no­mies of scale” (when a company is so big that its cons­ti­tu­ent parts go awry and it is seemingly helpless to address the problems), and captu­ring their regu­la­tors to such a degree that they can get away with a host of evils.

Then came a fabulist named Robert Bork, a former solicitor general whom Reagan appointed to the powerful U.S. Court of Appeals for the D.C. Circuit and who had created an alternate legislative history of the Sherman Act and its successors out of whole cloth. Bork insisted that these statutes were never targeted at monopolies (despite a wealth of evidence to the contrary, including the transcribed speeches of the acts’ authors) but, rather, that they were intended to prevent “consumer harm” — in the form of higher prices.

Bork was a crank, but he was a crank with a theory that rich people really liked. Mono­po­lies are a great way to make rich people richer by allo­wing them to receive “mono­poly rents” (that is, bigger profits) and capture regu­la­tors, leading to a weaker, more favo­ra­ble regu­la­tory envi­ron­ment with fewer protec­ti­ons for custo­mers, suppli­ers, the envi­ron­ment, and workers.

Bork’s theories were especially palatable to the same power brokers who backed Reagan, and Reagan’s Department of Justice and other agencies began to incorporate Bork’s antitrust doctrine into their enforcement decisions (Reagan even put Bork up for a Supreme Court seat, but Bork flunked the Senate confirmation hearing so badly that, decades later, D.C. insiders use the term “borked” to refer to any catastrophically bad political performance).

Little by little, Bork’s theo­ries ente­red the mains­tream, and their backers began to infil­trate the legal educa­tion field, even putting on junkets where members of the judi­ci­ary were trea­ted to lavish meals, fun outdoor acti­vi­ties, and semi­nars where they were indoc­tri­na­ted into the consu­mer harm theory of anti­trust. The more Bork’s theo­ries took hold, the more money the mono­po­lists were making — and the more surplus capi­tal they had at their dispo­sal to lobby for even more Borkian anti­trust influ­ence campaigns.

The history of Bork’s anti­trust theo­ries is a really good exam­ple of the kind of covertly engi­ne­e­red shifts in public opinion that Zuboff warns us against, where fringe ideas become mains­tream ortho­doxy. But Bork didn’t change the world over­night. He played a very long game, for over a gene­ra­tion, and he had a tail­wind because the same forces that backed oligar­chic anti­trust theo­ries also backed many other oligar­chic shifts in public opinion. For exam­ple, the idea that taxa­tion is theft, that wealth is a sign of virtue, and so on — all of these theo­ries meshed to form a cohe­rent ideo­logy that eleva­ted inequa­lity to a virtue.

Today, many fear that machine learning allows surveillance capitalism to sell “Bork-as-a-Service,” at internet speeds, so that you can contract a machine-learning company to engineer rapid shifts in public sentiment without needing the capital to sustain a multipronged, multigenerational project working at the local, state, national, and global levels in business, law, and philosophy. I do not believe that such a project is plausible, though I agree that this is basically what the platforms claim to be selling. They’re just lying about it. Big Tech lies all the time, including in their sales literature.

The idea that tech forms “natural monopolies” (monopolies that are the inevitable result of the realities of an industry, such as the monopolies that accrue to the first company to run long-haul phone lines or rail lines) is belied by tech’s own history: In the absence of anti-competitive tactics, Google was able to unseat AltaVista and Yahoo; Facebook was able to head off Myspace. There are some advantages to gathering mountains of data, but those mountains of data also have disadvantages: liability (from leaking), diminishing returns (from old data), and institutional inertia (big companies, like science, progress one funeral at a time).

Indeed, the birth of the web saw a mass-extinc­tion event for the exis­ting giant, wildly profi­ta­ble propri­e­tary tech­no­lo­gies that had capi­tal, network effects, and walls and moats surroun­ding their busi­nes­ses. The web showed that when a new industry is built around a proto­col, rather than a product, the combi­ned might of everyone who uses the proto­col to reach their custo­mers or users or commu­ni­ties outweighs even the most massive products. Compu­Serve, AOL, MSN, and a host of other propri­e­tary walled gardens lear­ned this lesson the hard way: Each beli­e­ved it could stay sepa­rate from the web, offe­ring “cura­tion” and a guaran­tee of consis­tency and quality instead of the chaos of an open system. Each was wrong and ended up being absor­bed into the public web.

Yes, tech is heavily monopolized and is now closely associated with industry concentration, but this has more to do with timing than with any intrinsically monopolistic tendencies. Tech was born at the moment that antitrust enforcement was being dismantled, and tech fell into exactly the same pathologies that antitrust was supposed to guard against. To a first approximation, it is reasonable to assume that tech’s monopolies are the result of a lack of anti-monopoly action and not the much-touted unique characteristics of tech, such as network effects, first-mover advantage, and so on.

In support of this thesis, I offer the concentration that every other industry has undergone over the same period. From professional wrestling to consumer packaged goods to commercial property leasing to banking to sea freight to oil to record labels to newspaper ownership to theme parks, every industry has undergone a massive shift toward concentration. There are no obvious network effects or first-mover advantages at play in these industries. However, in every case, these industries attained their concentrated status through tactics that were prohibited before Bork’s triumph: merging with major competitors, buying out innovative new market entrants, horizontal and vertical integration, and a suite of anti-competitive tactics that were once illegal but are no longer.

Again: When you change the laws inten­ded to prevent mono­po­lies and then mono­po­lies form in exac­tly the way the law was suppo­sed to prevent, it is reaso­na­ble to suppose that these facts are rela­ted. Tech’s concen­tra­tion can be readily explai­ned without recourse to radi­cal theo­ries of network effects — but only if you’re willing to indict unre­gu­la­ted markets as tending toward mono­poly. Just as a life­long smoker can give you a hundred reasons why their smoking didn’t cause their cancer (“It was the envi­ron­men­tal toxins”), true beli­e­vers in unre­gu­la­ted markets have a whole suite of uncon­vin­cing expla­na­ti­ons for mono­poly in tech that leave capi­ta­lism intact.

Stee­ring with the winds­hi­eld wipers

It’s been 40 years since Bork’s project to reha­bi­li­tate mono­po­lies achi­e­ved liftoff, and that is a gene­ra­tion and a half, which is plenty of time to take a common idea and make it seem outlan­dish and vice versa. Before the 1940s, afflu­ent Ameri­cans dres­sed their baby boys in pink while baby girls wore blue (a “deli­cate and dainty” color). While gende­red colors are obvi­ously totally arbi­trary, many still greet this news with amaze­ment and find it hard to imagine a time when pink conno­ted mascu­li­nity.

After 40 years of studiously ignoring antitrust analysis and enforcement, it’s not surprising that we’ve all but forgotten that antitrust exists, that in living memory, growth through mergers and acquisitions was largely prohibited under law, and that market-cornering strategies like vertical integration could land a company in court.

Antitrust is a market society’s steering wheel, the control of first resort to keep would-be masters of the universe in their lanes. But Bork and his cohort ripped out our steering wheel 40 years ago. The car is still barreling along, and so we’re yanking as hard as we can on all the other controls in the car as well as desperately flapping the doors and rolling the windows up and down in the hopes that one of these other controls can be repurposed to let us choose where we’re heading before we careen off a cliff.

It’s like a 1960s science-fiction plot come to life: People stuck in a “generation ship,” plying its way across the stars, a ship once piloted by their ancestors; and now, after a great cataclysm, the ship’s crew have forgotten that they’re in a ship at all and no longer remember where the control room is. Adrift, the ship is racing toward its extinction, and unless we can seize the controls and execute an emergency course correction, we’re all headed for a fiery death in the heart of a sun.

Survei­llance still matters

None of this is to mini­mize the problems with survei­llance. Survei­llance matters, and Big Tech’s use of survei­llance is an exis­ten­tial risk to our species, but that’s not because survei­llance and machine lear­ning rob us of our free will.

Survei­llance has become much more effi­ci­ent thanks to Big Tech. In 1989, the Stasi — the East German secret police — had the whole country under survei­llance, a massive under­ta­king that recrui­ted one out of every 60 people to serve as an infor­mant or inte­lli­gence opera­tive.

Today, we know that the NSA is spying on a signi­fi­cant frac­tion of the entire world’s popu­la­tion, and its ratio of survei­llance opera­ti­ves to the survei­lled is more like 1:10,000 (that’s probably on the low side since it assu­mes that every Ameri­can with top-secret clea­rance is working for the NSA on this project — we don’t know how many of those clea­red people are invol­ved in NSA spying, but it’s defi­ni­tely not all of them).

How did the ratio of surveillance operatives to the surveilled stretch from 1:60 to 1:10,000 in less than 30 years? It’s thanks to Big Tech. Our devices and services gather most of the data that the NSA mines for its surveillance project. We pay for these devices and the services they connect to, and then we painstakingly perform the data-entry tasks associated with logging facts about our lives, opinions, and preferences. This mass surveillance project has been largely useless for fighting terrorism: The NSA can only point to a single minor success story in which it used its data collection program to foil an attempt by a U.S. resident to wire a few thousand dollars to an overseas terror group. It’s ineffective for much the same reason that commercial surveillance projects are largely ineffective at targeting advertising: The people who want to commit acts of terror, like people who want to buy a refrigerator, are extremely rare. If you’re trying to detect a phenomenon whose base rate is one in a million with an instrument whose accuracy is only 99%, then every true positive will come at the cost of 9,999 false positives.

Let me explain that again: If one in a million people is a terro­rist, then there will only be about one terro­rist in a random sample of one million people. If your test for detec­ting terro­rists is 99% accu­rate, it will iden­tify 10,000 terro­rists in your million-person sample (1% of one million is 10,000). For every true posi­tive, you’ll get 9,999 false posi­ti­ves.
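
The arithmetic is easy to check. A short Python sketch of the back-of-the-envelope version used above, where “99% accurate” is read as a 1% false-positive rate:

```python
population = 1_000_000
base_rate = 1 / 1_000_000        # one terrorist per million people
false_positive_rate = 0.01      # what "99% accurate" buys you

flagged = population * false_positive_rate  # 10,000 people flagged
real = population * base_rate               # 1 actual terrorist
false_alarms = flagged - real               # 9,999 innocents accused

print(int(flagged), int(real), int(false_alarms))  # 10000 1 9999
```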

In reality, the accu­racy of algo­rith­mic terro­rism detec­tion falls far short of the 99% mark, as does refri­ge­ra­tor ad targe­ting. The diffe­rence is that being falsely accu­sed of wanting to buy a fridge is a minor nuisance while being falsely accu­sed of plan­ning a terror attack can destroy your life and the lives of everyone you love.

Mass state survei­llance is only feasi­ble because of survei­llance capi­ta­lism and its extre­mely low-yield ad-targe­ting systems, which require a cons­tant feed of perso­nal data to remain barely viable. Survei­llance capi­ta­lism’s primary failure mode is mistar­ge­ted ads while mass state survei­llan­ce’s primary failure mode is grotes­que human rights abuses, tending toward tota­li­ta­ri­a­nism.

State survei­llance is no mere para­site on Big Tech, sucking up its data and giving nothing in return. In truth, the two are symbi­o­tes: Big Tech sucks up our data for spy agen­cies, and spy agen­cies ensure that govern­ments don’t limit Big Tech’s acti­vi­ties so seve­rely that it would no longer serve the spy agen­ci­es’ needs. There is no firm distinc­tion between state survei­llance and survei­llance capi­ta­lism; they are depen­dent on one anot­her.

To see this at work today, look no further than Amazon’s home survei­llance device, the Ring door­bell, and its asso­ci­a­ted app, Neigh­bors. Ring — a product that Amazon acqui­red and did not deve­lop in house — makes a camera-enabled door­bell that stre­ams footage from your front door to your mobile device. The Neigh­bors app allows you to form a neigh­bor­hood-wide survei­llance grid with your fellow Ring owners through which you can share clips of “suspi­ci­ous charac­ters.” If you’re thin­king that this sounds like a recipe for letting curtain-twit­ching racists super­charge their suspi­ci­ons of people with brown skin who walk down their blocks, you’re right. Ring has become a de facto, off-the-books arm of the police without any of the pesky over­sight or rules.

In mid-2019, a series of public records requests reve­a­led that Amazon had struck confi­den­tial deals with more than 400 local law enfor­ce­ment agen­cies through which the agen­cies would promote Ring and Neigh­bors and in exchange get access to footage from Ring came­ras. In theory, cops would need to request this footage through Amazon (and inter­nal docu­ments reveal that Amazon devo­tes subs­tan­tial resour­ces to coaching cops on how to spin a convin­cing story when doing so), but in prac­tice, when a Ring custo­mer turns down a police request, Amazon only requi­res the agency to formally request the footage from the company, which it will then produce.

Ring and law enfor­ce­ment have found many ways to inter­t­wine their acti­vi­ties. Ring stri­kes secret deals to acquire real-time access to 911 dispatch and then stre­ams alar­ming crime reports to Neigh­bors users, which serve as convin­cers for anyone who’s contem­pla­ting a survei­llance door­bell but isn’t sure whet­her their neigh­bor­hood is dange­rous enough to warrant it.

The more the cops buzz-market the survei­llance capi­ta­list Ring, the more survei­llance capa­bi­lity the state gets. Cops who rely on private enti­ties for law-enfor­ce­ment roles then brief against any controls on the deploy­ment of that tech­no­logy while the compa­nies return the favor by lobbying against rules requi­ring public over­sight of police survei­llance tech­no­logy. The more the cops rely on Ring and Neigh­bors, the harder it will be to pass laws to curb them. The fewer laws there are against them, the more the cops will rely on them.

Dignity and sanc­tu­ary

But even if we could exer­cise demo­cra­tic control over our states and force them to stop raiding survei­llance capi­ta­lism’s reser­voirs of beha­vi­o­ral data, survei­llance capi­ta­lism would still harm us.

This is an area where Zuboff shines. Her chap­ter on “sanc­tu­ary” — the feeling of being unob­ser­ved — is a beau­ti­ful hymn to intros­pec­tion, calm­ness, mind­ful­ness, and tran­qui­lity.

When you are watched, somet­hing chan­ges. Anyone who has ever raised a child knows this. You might look up from your book (or more realis­ti­cally, from your phone) and catch your child in a moment of profound reali­za­tion and growth, a moment where they are lear­ning somet­hing that is right at the edge of their abili­ties, requi­ring their entire fero­ci­ous concen­tra­tion. For a moment, you’re trans­fi­xed, watching that rare and beau­ti­ful moment of focus playing out before your eyes, and then your child looks up and sees you seeing them, and the moment collap­ses. To grow, you need to be and expose your authen­tic self, and in that moment, you are vulne­ra­ble like a hermit crab scutt­ling from one shell to the next. The tender, unpro­tec­ted tissues you expose in that moment are too deli­cate to reveal in the presence of anot­her, even some­one you trust as impli­citly as a child trusts their parent.

In the digi­tal age, our authen­tic selves are inex­tri­cably tied to our digi­tal lives. Your search history is a running ledger of the ques­ti­ons you’ve ponde­red. Your loca­tion history is a record of the places you’ve sought out and the expe­ri­en­ces you’ve had there. Your social graph reve­als the diffe­rent facets of your iden­tity, the people you’ve connec­ted with.

To be obser­ved in these acti­vi­ties is to lose the sanc­tu­ary of your authen­tic self.

There’s anot­her way in which survei­llance capi­ta­lism robs us of our capa­city to be our authen­tic selves: by making us anxi­ous. Survei­llance capi­ta­lism isn’t really a mind-control ray, but you don’t need a mind-control ray to make some­one anxi­ous. After all, anot­her word for anxi­ety is agita­tion, and to make some­one expe­ri­ence agita­tion, you need merely to agitate them. To poke them and prod them and beep at them and buzz at them and bombard them on an inter­mit­tent sche­dule that is just random enough that our limbic systems never quite become inured to it.

Our devi­ces and servi­ces are “gene­ral purpose” in that they can connect anyt­hing or anyone to anyt­hing or anyone else and that they can run any program that can be writ­ten. This means that the distrac­tion rectan­gles in our pockets hold our most preci­ous moments with our most belo­ved people and their most urgent or time-sensi­tive commu­ni­ca­ti­ons (from “running late can you get the kid?” to “doctor gave me bad news and I need to talk to you RIGHT NOW”) as well as ads for refri­ge­ra­tors and recrui­ting messa­ges from Nazis.

All day and all night, our pockets buzz, shattering our concentration and tearing apart the fragile webs of connection we spin as we think through difficult ideas. If you locked someone in a cell and agitated them like this, we’d call it “sleep deprivation torture,” and it would be a war crime under the Geneva Conventions.

Afflic­ting the afflic­ted

The effects of survei­llance on our ability to be our authen­tic selves are not equal for all people. Some of us are lucky enough to live in a time and place in which all the most impor­tant facts of our lives are widely and roundly soci­ally accep­ta­ble and can be publicly displayed without the risk of social conse­quence.

But for many of us, this is not true. Recall that in living memory, many of the ways of being that we think of as soci­ally accep­ta­ble today were once cause for dire social sanc­tion or even impri­son­ment. If you are 65 years old, you have lived through a time in which people living in “free soci­e­ties” could be impri­so­ned or sanc­ti­o­ned for enga­ging in homo­se­xual acti­vity, for falling in love with a person whose skin was a diffe­rent color than their own, or for smoking weed.

Today, these activities aren’t just decriminalized in much of the world; they’re considered normal, and the fallen prohibitions are viewed as shameful, regrettable relics of the past.

How did we get from prohi­bi­tion to norma­li­za­tion? Through private, perso­nal acti­vity: People who were secretly gay or secret pot-smokers or who secretly loved some­one with a diffe­rent skin color were vulne­ra­ble to reta­li­a­tion if they made their true selves known and were limi­ted in how much they could advo­cate for their own right to exist in the world and be true to them­sel­ves. But because there was a private sphere, these people could form alli­an­ces with their friends and loved ones who did not share their disfa­vo­red traits by having private conver­sa­ti­ons in which they came out, disclo­sing their true selves to the people around them and brin­ging them to their cause one conver­sa­tion at a time.

The right to choose the time and manner of these conver­sa­ti­ons was key to their success. It’s one thing to come out to your dad while you’re on a fishing trip away from the world and anot­her thing enti­rely to blurt it out over the Christ­mas dinner table while your racist Face­book uncle is there to make a scene.

Without a private sphere, there’s a chance that none of these chan­ges would have come to pass and that the people who bene­fi­ted from these chan­ges would have either faced social sanc­tion for coming out to a hostile world or would have never been able to reveal their true selves to the people they love.

The coro­llary is that, unless you think that our soci­ety has attai­ned social perfec­tion — that your grand­chil­dren in 50 years will ask you to tell them the story of how, in 2019, every injus­tice had been righ­ted and no further change had to be made — then you should expect that right now, at this minute, there are people you love, whose happi­ness is key to your own, who have a secret in their hearts that stops them from ever being their authen­tic selves with you. These people are sorro­wing and will go to their graves with that secret sorrow in their hearts, and the source of that sorrow will be the falsity of their rela­ti­ons­hip to you.

A private realm is neces­sary for human progress.

Any data you collect and retain will even­tu­ally leak

The lack of a private life can rob vulne­ra­ble people of the chance to be their authen­tic selves and cons­train our acti­ons by depri­ving us of sanc­tu­ary, but there is anot­her risk that is borne by everyone, not just people with a secret: crime.

Personally identifying information is of very limited use for the purpose of controlling people’s minds, but identity theft — really a catchall term for a whole constellation of terrible criminal activities that can destroy your finances, compromise your personal integrity, ruin your reputation, or even expose you to physical danger — thrives on it.

Attac­kers are not limi­ted to using data from one brea­ched source, either. Multi­ple servi­ces have suffe­red brea­ches that expo­sed names, addres­ses, phone numbers, pass­words, sexual tastes, school grades, work perfor­mance, brus­hes with the crimi­nal justice system, family details, gene­tic infor­ma­tion, finger­prints and other biome­trics, reading habits, search histo­ries, lite­rary tastes, pseu­dony­mous iden­ti­ties, and other sensi­tive infor­ma­tion. Attac­kers can merge data from these diffe­rent brea­ches to build up extre­mely detai­led dossi­ers on random subjects and then use diffe­rent parts of the data for diffe­rent crimi­nal purpo­ses.

For exam­ple, attac­kers can use leaked user­name and pass­word combi­na­ti­ons to hijack whole fleets of commer­cial vehi­cles that have been fitted with anti-theft GPS trac­kers and immo­bi­li­zers or to hijack baby moni­tors in order to terro­rize todd­lers with the audio tracks from porno­graphy. Attac­kers use leaked data to trick phone compa­nies into giving them your phone number, then they inter­cept SMS-based two-factor authen­ti­ca­tion codes in order to take over your email, bank account, and/or cryp­to­cur­rency wallets.

Attac­kers are endlessly inven­tive in the pursuit of crea­tive ways to weapo­nize leaked data. One common use of leaked data is to pene­trate compa­nies in order to access more data.

Like spies, online frauds­ters are totally depen­dent on compa­nies over-collec­ting and over-retai­ning our data. Spy agen­cies some­ti­mes pay compa­nies for access to their data or inti­mi­date them into giving it up, but some­ti­mes they work just like crimi­nals do — by snea­king data out of compa­ni­es’ data­ba­ses.

The over-collec­tion of data has a host of terri­ble social conse­quen­ces, from the erosion of our authen­tic selves to the under­mi­ning of social progress, from state survei­llance to an epide­mic of online crime. Commer­cial survei­llance is also a boon to people running influ­ence campaigns, but that’s the least of our trou­bles.

Criti­cal tech excep­ti­o­na­lism is still tech excep­ti­o­na­lism

Big Tech has long practiced technology exceptionalism: the idea that it should not be subject to the mundane laws and norms of “meatspace.” Mottoes like Facebook’s “move fast and break things” drew justifiable scorn for the companies’ self-serving rhetoric.

Tech excep­ti­o­na­lism got us all into a lot of trou­ble, so it’s ironic and distres­sing to see Big Tech’s critics commit­ting the same sin.

Big Tech is not a “rogue capi­ta­lism” that cannot be cured through the tradi­ti­o­nal anti-mono­poly reme­dies of trust­bus­ting (forcing compa­nies to divest of compe­ti­tors they have acqui­red) and bans on mergers to mono­poly and other anti-compe­ti­tive tactics. Big Tech does not have the power to use machine lear­ning to influ­ence our beha­vior so thoroughly that markets lose the ability to punish bad actors and reward supe­rior compe­ti­tors. Big Tech has no rule-writing mind-control ray that neces­si­ta­tes ditching our old tool­box.

The thing is, people have been claiming to have perfected mind-control rays for centuries, and every time, it turned out to be a con — though sometimes the con artists were also conning themselves.

For generations, the advertising industry has been steadily improving its ability to sell advertising services to businesses while only making marginal gains in selling those businesses’ products to prospective customers. John Wanamaker’s lament that “50% of my advertising budget is wasted, I just don’t know which 50%” is a testament to the triumph of ad executives, who successfully convinced Wanamaker that only half of the money he spent went to waste.

The tech industry has made enormous improvements in the science of convincing businesses that they’re good at advertising while their actual improvements to advertising — as opposed to targeting — have been pretty ho-hum. The vogue for machine learning — and the mystical invocation of “artificial intelligence” as a synonym for straightforward statistical inference techniques — has greatly boosted the efficacy of Big Tech’s sales pitch as marketers have exploited potential customers’ lack of technical sophistication to get away with breathtaking acts of overpromising and underdelivering.

It’s tempting to think that if businesses are willing to pour billions into a venture, then the venture must be a good one. Yet there are plenty of times when this rule of thumb has led us astray. For example, it’s virtually unheard of for managed investment funds to outperform simple index funds, and investors who put their money into the hands of expert money managers overwhelmingly fare worse than those who entrust their savings to index funds. But managed funds still account for the majority of the money invested in the markets, and they are patronized by some of the richest, most sophisticated investors in the world. Their vote of confidence in an underperforming sector is a parable about the role of luck in wealth accumulation, not a sign that managed funds are a good buy.
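
The arithmetic of that underperformance is mostly fee drag. A toy compounding sketch (the 7% gross return and both fee levels are illustrative assumptions, not market data):

```python
# Toy illustration of fee drag: identical gross returns, different fees.
# The 7% gross return, 0.05% index fee, and 1% managed fee are
# illustrative assumptions, not measurements.

def final_balance(years: int, gross: float, fee: float, start: float = 10_000.0) -> float:
    balance = start
    for _ in range(years):
        balance *= 1 + gross - fee  # compound one year, net of fees
    return balance

years = 30
index = final_balance(years, gross=0.07, fee=0.0005)
managed = final_balance(years, gross=0.07, fee=0.01)
print(f"index fund after {years} years:   ${index:,.0f}")
print(f"managed fund after {years} years: ${managed:,.0f}")
# Before any stock-picking shortfall at all, the managed fund ends up
# roughly a quarter smaller on fees alone.
```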

The claims of Big Tech’s mind-control system are full of tells that the enterprise is a con. Consider, for example, the reliance on the “Big Five” personality traits as a primary means of influencing people, even though the “Big Five” theory is unsupported by any large-scale, peer-reviewed studies and is mostly the realm of marketing hucksters and pop psych.

Big Tech’s promotional materials also claim that their algorithms can accurately perform “sentiment analysis” or detect people’s moods based on their “microexpressions,” but these are marketing claims, not scientific ones. These methods are largely untested by independent scientific experts, and where they have been tested, they’ve been found sorely wanting. Microexpressions are particularly suspect, as the companies that specialize in training people to detect them have been shown to underperform relative to random chance.

Big Tech has been so good at marketing its own supposed superpowers that it’s easy to believe that they can market everything else with similar acumen, but it’s a mistake to believe the hype. Any statement a company makes about the quality of its products is clearly not impartial. The fact that we distrust all the things that Big Tech says about its data handling, compliance with privacy laws, etc., is only reasonable — but why on Earth would we treat Big Tech’s marketing literature as the gospel truth? Big Tech lies about just about everything, including how well its machine-learning-fueled persuasion systems work.

That skepticism should infuse all of our evaluations of Big Tech and its supposed abilities, including our perusal of its patents. Zuboff vests these patents with enormous significance, pointing out that Google claimed extensive new persuasion capabilities in its patent filings. These claims are doubly suspect: first, because they are so self-serving, and second, because the patent itself is so notoriously an invitation to exaggeration.

Patent applications take the form of a series of claims and range from broad to narrow. A typical patent starts out by claiming that its authors have invented a method or system for doing every conceivable thing that anyone might do, ever, with any tool or device. Then it narrows that claim in successive stages until we get to the actual “invention” that is the true subject of the patent. The hope is that the patent examiner — who is almost certainly overworked and underinformed — will miss the fact that some or all of these claims are ridiculous, or at least suspect, and grant the patent’s broader claims. Patents for unpatentable things are still incredibly useful because they can be wielded against competitors who might license that patent or steer clear of its claims rather than endure the lengthy, expensive process of contesting it.

What’s more, software patents are routinely granted even though the filer doesn’t have any evidence that they can do the thing claimed by the patent. That is, you can patent an “invention” that you haven’t actually made and that you don’t know how to make.

With these considerations in hand, it becomes obvious that the fact that a Big Tech company has patented what it says is an effective mind-control ray is largely irrelevant to whether Big Tech can in fact control our minds.

Big Tech collects our data for many reasons, including the diminishing returns on existing stores of data. But many tech companies also collect data out of a mistaken tech exceptionalist belief in the network effects of data. Network effects occur when each new user in a system increases its value. The classic example is fax machines: A single fax machine is of no use, two fax machines are of limited use, but every new fax machine that’s put to use increases the number of possible fax-to-fax links: the nth machine adds n−1 new connections.
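
The fax arithmetic is easy to check: n machines support n(n−1)/2 possible links, so the nth machine contributes n−1 new ones. A quick sketch:

```python
# Pairwise links among n fax machines: n * (n - 1) / 2.
# Each new machine adds (n - 1) new links, so the network's
# possible connections grow quadratically with its users.
for n in [1, 2, 3, 10, 100]:
    links = n * (n - 1) // 2
    print(f"{n:>3} machines -> {links:>5} possible fax-to-fax links")
```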

Data mined for predictive systems doesn’t necessarily produce these dividends. Think of Netflix: The predictive value of the data mined from a million English-speaking Netflix viewers is hardly improved by the addition of one more user’s viewing data. Most of the data Netflix acquires after that first minimum viable sample duplicates existing data and produces only minimal gains. Meanwhile, retraining models with new data gets progressively more expensive as the number of data points increases, and manual tasks like labeling and validating data do not get cheaper at scale.
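
A toy model makes the contrast vivid. If prediction error falls roughly as a power law in the number of training examples (a commonly observed shape; the exponent below is an illustrative assumption), each additional tranche of user data buys less than the one before:

```python
# Assumed power-law learning curve: error ~ n ** -0.5.
# The exponent is illustrative; real curves vary, but the shape --
# steep early gains, negligible late ones -- is the common finding.
def error(n_examples: int) -> float:
    return n_examples ** -0.5

for n in [1_000, 10_000, 100_000, 1_000_000]:
    gain = error(n) - error(n * 10)
    print(f"{n:>9} -> {n * 10:>9} examples: error falls by only {gain:.4f}")
```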

Businesses pursue fads to the detriment of their profits all the time, especially when the businesses and their investors are motivated not by the prospect of becoming profitable but rather by the prospect of being acquired by a Big Tech giant or by having an IPO. For these firms, ticking faddish boxes like “collects as much data as possible” might realize a bigger return on investment than “collects a business-appropriate quantity of data.”

This is another harm of tech exceptionalism: The belief that more data always produces more profits in the form of more insights that can be translated into better mind-control rays drives firms to over-collect and over-retain data beyond all rationality. And since the firms are behaving irrationally, a good number of them will go out of business and become ghost ships whose cargo holds are stuffed full of data that can harm people in myriad ways — but which no one is responsible for any longer. Even if the companies don’t go under, the data they collect is maintained behind the minimum viable security — just enough security to keep the company viable while it waits to get bought out by a tech giant, an amount calculated to spend not one penny more than is necessary on protecting data.

How monopolies, not mind control, drive surveillance capitalism: The Snapchat story

For the first decade of its existence, Facebook competed with the social media giants of the day (Myspace, Orkut, etc.) by presenting itself as the pro-privacy alternative. Indeed, Facebook justified its walled garden — which let users bring in data from the web but blocked web services like Google Search from indexing and caching Facebook pages — as a pro-privacy measure that protected users from the surveillance-happy winners of the social media wars like Myspace.

Despite frequent promises that it would never collect or analyze its users’ data, Facebook periodically created initiatives that did just that, like the creepy, ham-fisted Beacon tool, which spied on you as you moved around the web and then added your online activities to your public timeline, allowing your friends to monitor your browsing habits. Beacon sparked a user revolt. Every time, Facebook backed off from its surveillance initiative, but not all the way; inevitably, the new Facebook would be more surveilling than the old Facebook, though not quite as surveilling as the intermediate Facebook following the launch of the new product or service.

The pace at which Facebook ramped up its surveillance efforts seems to have been set by Facebook’s competitive landscape. The more competitors Facebook had, the better it behaved. Every time a major competitor foundered, Facebook’s behavior got markedly worse.

All the while, Facebook was prodigiously acquiring companies, including a company called Onavo. Nominally, Onavo made a battery-monitoring mobile app. But the permissions that Onavo required were so expansive that the app was able to gather fine-grained telemetry on everything users did with their phones, including which apps they used and how they were using them.

Through Onavo, Facebook discovered that it was losing market share to Snapchat, an app that — like Facebook a decade before — billed itself as the pro-privacy alternative to the status quo. Through Onavo, Facebook was able to mine data from the devices of Snapchat users, including both current and former Snapchat users. This spurred Facebook to acquire Instagram — some features of which competed with Snapchat — and then to fine-tune Instagram’s features and sales pitch to erode Snapchat’s gains and ensure that Facebook would not have to face the kinds of competitive pressures it had earlier inflicted on Myspace and Orkut.

The story of how Facebook crushed Snapchat reveals the relationship between monopoly and surveillance capitalism. Facebook combined surveillance with lax antitrust enforcement to spot the competitive threat of Snapchat on its horizon and then take decisive action against it. Facebook’s surveillance capitalism let it avert competitive pressure with anti-competitive tactics. Facebook users still want privacy — Facebook hasn’t used surveillance to brainwash them out of it — but they can’t get it because Facebook’s surveillance lets it destroy any hope of a rival service emerging that competes on privacy features.

A monopoly over your friends

A decentralization movement has tried to erode the dominance of Facebook and other Big Tech companies by fielding “indieweb” alternatives — Mastodon as a Twitter alternative, Diaspora as a Facebook alternative, etc. — but these efforts have failed to attain any kind of liftoff.

Fundamentally, each of these services is hamstrung by the same problem: Every potential user for a Facebook or Twitter alternative has to convince all their friends to follow them to a decentralized web alternative in order to continue to realize the benefit of social media. For many of us, the only reason to have a Facebook account is that our friends have Facebook accounts, and the reason they have Facebook accounts is that we have Facebook accounts.

All of this has conspired to make Facebook — and other dominant platforms — into “kill zones” in which investors will not fund new entrants.

And yet, all of today’s tech giants came into existence despite the entrenched advantage of the companies that came before them. To understand how that happened, you have to understand both interoperability and adversarial interoperability.

“Interoperability” is the ability of two technologies to work with one another: Anyone can make an LP that will play on any record player, anyone can make a filter you can install in your stove’s extractor fan, anyone can make gasoline for your car, anyone can make a USB phone charger that fits in your car’s cigarette lighter receptacle, anyone can make a light bulb that works in your light socket, anyone can make bread that will toast in your toaster.

Interoperability is often a source of innovation and consumer benefit: Apple made the first commercially successful PC, but millions of independent software vendors made interoperable programs that ran on the Apple II Plus. The simple analog antenna inputs on the back of TVs first allowed cable operators to connect directly to TVs, then they allowed game console companies and then personal computer companies to use standard televisions as displays. Standard RJ-11 telephone jacks allowed for the production of phones from a variety of vendors in a variety of forms, from the free football-shaped phone that came with a Sports Illustrated subscription to business phones with speakers, hold functions, and so on, then answering machines, and finally modems, paving the way for the internet revolution.

“Interoperability” is often used interchangeably with “standardization,” the process by which manufacturers and other stakeholders hammer out a set of agreed-upon rules for implementing a technology, such as the electrical plug on your wall, the CAN bus used by your car’s computer systems, or the HTML instructions that your browser interprets.

But interoperability doesn’t require standardization — indeed, standardization often proceeds from the chaos of ad hoc interoperability measures. The inventor of the cigarette-lighter USB charger didn’t need to get permission from car manufacturers or even the manufacturers of the dashboard lighter subcomponent. The automakers didn’t take any countermeasures to prevent the use of these aftermarket accessories by their customers, but they also didn’t do anything to make life easier for the chargers’ manufacturers. This is a kind of “neutral interoperability.”

Beyond neutral interoperability, there is “adversarial interoperability.” That’s when a manufacturer makes a product that interoperates with another manufacturer’s product despite the second manufacturer’s objections and even if that means bypassing a security system designed to prevent interoperability.

Probably the most familiar form of adversarial interoperability is third-party printer ink. Printer manufacturers claim that they sell printers below cost and that the only way they can recoup the losses they incur is by charging high markups on ink. To prevent the owners of printers from buying ink elsewhere, the printer companies deploy a suite of anti-customer security systems that detect and reject both refilled and third-party cartridges.

Owners of printers take the position that HP and Epson and Brother are not charities and that customers for their wares have no obligation to help them survive, and so if the companies choose to sell their products at a loss, that’s their foolish choice and their consequences to live with. Likewise, competitors who make ink or refill kits observe that they don’t owe printer companies anything, and their erosion of printer companies’ margins is the printer companies’ problem, not their competitors’. After all, the printer companies shed no tears when they drive a refiller out of business, so why should the refillers concern themselves with the economic fortunes of the printer companies?

Adversarial interoperability has played an outsized role in the history of the tech industry: from the founding of the “alt.*” Usenet hierarchy (which was started against the wishes of Usenet’s maintainers and which grew to be bigger than all of Usenet combined) to the browser wars (when Netscape and Microsoft devoted massive engineering efforts to making their browsers incompatible with the other’s special commands and peccadilloes) to Facebook (whose success was built in part by helping its new users stay in touch with friends they’d left behind on Myspace because Facebook supplied them with a tool that scraped waiting messages from Myspace and imported them into Facebook, effectively creating a Facebook-based Myspace reader).
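
To make the mechanics concrete, here is a minimal sketch of that scraper pattern: log in with the user’s own credentials, fetch their waiting messages, and re-present them in a rival client. Every endpoint and page selector below is a hypothetical stand-in, not any real site’s API.

```python
# A minimal sketch of the adversarial-interoperability scraper pattern.
# All endpoints and HTML structure here are hypothetical stand-ins;
# real sites differ, and their terms of service may forbid this.
import requests
from bs4 import BeautifulSoup

def fetch_waiting_messages(username: str, password: str) -> list[dict]:
    session = requests.Session()
    # Log in as the user, with the user's own credentials and consent.
    session.post("https://legacy-social-site.example/login",
                 data={"user": username, "pass": password})
    # Fetch the user's inbox page and parse the messages out of the HTML.
    page = session.get("https://legacy-social-site.example/inbox")
    soup = BeautifulSoup(page.text, "html.parser")
    return [
        {"sender": m.select_one(".sender").get_text(strip=True),
         "body": m.select_one(".body").get_text(strip=True)}
        for m in soup.select(".message")
    ]

# A rival service could then display these messages in its own client,
# letting users keep their old contacts while they migrate.
```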

Today, incumbency is seen as an unassailable advantage. Facebook is where all of your friends are, so no one can start a Facebook competitor. But adversarial compatibility reverses the competitive advantage: If you were allowed to compete with Facebook by providing a tool that imported all your users’ waiting Facebook messages into an environment that competed on lines that Facebook couldn’t cross, like eliminating surveillance and ads, then Facebook would be at a huge disadvantage. It would have assembled all possible ex-Facebook users into a single, easy-to-find service; it would have educated them on how a Facebook-like service worked and what its potential benefits were; and it would have provided an easy means for disgruntled Facebook users to tell their friends where they might expect better treatment.

Adversarial interoperability was once the norm and a key contributor to the dynamic, vibrant tech scene, but now it is stuck behind a thicket of laws and regulations that add legal risks to the tried-and-true tactics of adversarial interoperability. New rules and new interpretations of existing rules mean that a would-be adversarial interoperator needs to steer clear of claims under copyright, terms of service, trade secrecy, tortious interference, and patent.

In the absence of a competitive market, lawmakers have resorted to assigning expensive, state-like duties to Big Tech firms, such as automatically filtering user contributions for copyright infringement or terrorist and extremist content or detecting and preventing harassment in real time or controlling access to sexual material.

These measures put a floor under how small we can make Big Tech because only the very largest companies can afford the humans and automated filters needed to perform these duties.

But that’s not the only way in which making platforms responsible for policing their users undermines competition. A platform that is expected to police its users’ conduct must prevent many vital adversarial interoperability techniques lest these subvert its policing measures. For example, if someone using a Twitter replacement like Mastodon is able to push messages into Twitter and read messages out of Twitter, they could avoid being caught by automated systems that detect and prevent harassment (such as systems that use the timing of messages or IP-based rules to make guesses about whether someone is a harasser).

To the extent that we are willing to let Big Tech police itself — rather than making Big Tech small enough that users can leave bad platforms for better ones and small enough that a regulation that simply puts a platform out of business will not destroy billions of users’ access to their communities and data — we build the case that Big Tech should be able to block its competitors and make it easier for Big Tech to demand legal enforcement tools to ban and punish attempts at adversarial interoperability.

Ultimately, we can try to fix Big Tech by making it responsible for bad acts by its users, or we can try to fix the internet by cutting Big Tech down to size. But we can’t do both. To replace today’s giant products with pluralistic protocols, we need to clear the legal thicket that prevents adversarial interoperability so that tomorrow’s nimble, personal, small-scale products can federate themselves with giants like Facebook, allowing the users who’ve left to continue to communicate with users who haven’t left yet, reaching tendrils over Facebook’s garden wall that Facebook’s trapped users can use to scale the walls and escape to the global, open web.

Fake news is an epistemological crisis

Tech is not the only industry that has undergone massive concentration since the Reagan era. Virtually every major industry — from oil to newspapers to meatpacking to sea freight to eyewear to online pornography — has become a clubby oligarchy that just a few players dominate.

At the same time, every industry has become something of a tech industry as general-purpose computers and general-purpose networks and the promise of efficiencies through data-driven analysis infuse every device, process, and firm with tech.

This phenomenon of industrial concentration is part of a wider story about wealth concentration overall as a smaller and smaller number of people own more and more of our world. This concentration of both wealth and industries means that our political outcomes are increasingly beholden to the parochial interests of the people and companies with all the money.

That means that whenever a regulator asks a question with an obvious, empirical answer (“Are humans causing climate change?” or “Should we let companies conduct commercial mass surveillance?” or “Does society benefit from allowing network neutrality violations?”), the answer that comes out is only correct if that correctness meets with the approval of rich people and the industries that made them so wealthy.

Rich people have always played an outsized role in politics, and even more so since the Supreme Court’s Citizens United decision eliminated key controls over political spending. Widening inequality and wealth concentration means that the very richest people are now a lot richer and can afford to spend a lot more money on political projects than ever before. Think of the Koch brothers or George Soros or Bill Gates.

But the policy distortions of rich individuals pale in comparison to the policy distortions that concentrated industries are capable of. The companies in highly concentrated industries are much more profitable than companies in competitive industries — no competition means not having to reduce prices or improve quality to win customers — leaving them with bigger capital surpluses to spend on lobbying.

Concentrated industries also find it easier to collaborate on policy objectives than competitive ones. When all the top execs from your industry can fit around a single boardroom table, they often do. And when they do, they can forge a consensus position on regulation.

Rising through the ranks in a concentrated industry generally means working at two or three of the big companies. When there are only a few companies in a given industry, each company has a more ossified executive rank, leaving ambitious execs with fewer paths to higher positions unless they are recruited to a rival. This means that the top execs in concentrated industries are likely to have been colleagues at some point and to socialize in the same circles — connected through social ties or, say, serving as trustees for each other’s estates. These tight social bonds foster a collegial, rather than competitive, attitude.

Highly concentrated industries also present a regulatory conundrum. When an industry is dominated by just four or five companies, the only people who are likely to truly understand the industry’s practices are its veteran executives. This means that top regulators are often former execs of the companies they are supposed to be regulating. These turns in government are often tacitly understood to be leaves of absence from industry, with former employers welcoming their erstwhile watchdogs back into their executive ranks once their terms have expired.

All this is to say that the tight social bonds, small number of firms, and regulatory capture of concentrated industries give the companies that comprise them the power to dictate many, if not all, of the regulations that bind them.

This is increasingly obvious. Whether it’s payday lenders winning the right to practice predatory lending or Apple winning the right to decide who can fix your phone or Google and Facebook winning the right to breach your private data without suffering meaningful consequences or victories for pipeline companies or impunity for opioid manufacturers or massive tax subsidies for incredibly profitable dominant businesses, it’s increasingly apparent that many of our official, evidence-based truth-seeking processes are, in fact, auctions for sale to the highest bidder.

It’s really impossible to overstate what a terrifying prospect this is. We live in an incredibly high-tech society, and none of us could acquire the expertise to evaluate every technological proposition that stands between us and our untimely, horrible deaths. You might devote your life to acquiring the media literacy to distinguish good scientific journals from corrupt pay-for-play lookalikes and the statistical literacy to evaluate the quality of the analysis in the journals as well as the microbiology and epidemiology knowledge to determine whether you can trust claims about the safety of vaccines — but that would still leave you unqualified to judge whether the wiring in your home will give you a lethal shock and whether your car’s brakes’ software will cause them to fail unpredictably and whether the hygiene standards at your butcher are sufficient to keep you from dying after you finish your dinner.

In a world as complex as this one, we have to defer to authorities, and we keep them honest by making those authorities accountable to us and binding them with rules to prevent conflicts of interest. We can’t possibly acquire the expertise to adjudicate conflicting claims about the best way to make the world safe and prosperous, but we can determine whether the adjudication process itself is trustworthy.

Right now, it’s obviously not.

The past 40 years of rising inequality and industry concentration, together with increasingly weak accountability and transparency for expert agencies, have created an increasingly urgent sense of impending doom, the sense that there are vast conspiracies afoot that operate with tacit official approval despite the likelihood they are working to better themselves by ruining the rest of us.

For example, it’s been decades since Exxon’s own scientists concluded that its products would render the Earth uninhabitable by humans. And yet those decades were lost to us, in large part because Exxon lobbied governments and sowed doubt about the dangers of its products and did so with the cooperation of many public officials. When the survival of you and everyone you love is threatened by conspiracies, it’s not unreasonable to start questioning the things you think you know in an attempt to determine whether they, too, are the outcome of another conspiracy.

The collapse of the credibility of our systems for divining and upholding truths has left us in a state of epistemological chaos. Once, most of us might have assumed that the system was working and that our regulations reflected our best understanding of the empirical truths of the world as they were best understood — now we have to find our own experts to help us sort the true from the false.

If you’re like me, you probably believe that vaccines are safe, but you (like me) probably also can’t explain the microbiology or statistics. Few of us have the math skills to review the literature on vaccine safety and describe why its statistical reasoning is sound. Likewise, few of us can review the stats in the (now discredited) literature on opioid safety and explain how those stats were manipulated. Both vaccines and opioids were embraced by medical authorities, after all, and one is safe while the other could ruin your life. You’re left with a kind of inchoate constellation of rules of thumb about which experts you trust to fact-check controversial claims and then to explain how all those respectable doctors with their peer-reviewed research on opioid safety were an aberration and then how you know that the doctors writing about vaccine safety are not an aberration.

I’m 100% certain that vaccinating is safe and effective, but I’m also at something of a loss to explain exactly, precisely, why I believe this, given all the corruption I know about and the many times the stamp of certainty has turned out to be a parochial lie told to further enrich the super rich.

Fake news — conspiracy theories, racist ideologies, scientific denialism — has always been with us. What’s changed today is not the mix of ideas in the public discourse but the popularity of the worst ideas in that mix. Conspiracy and denial have skyrocketed in lockstep with the growth of Big Inequality, which has also tracked the rise of Big Tech and Big Pharma and Big Wrestling and Big Car and Big Movie Theater and Big Everything Else.

No one can say for certain why this has happened, but the two dominant camps are idealism (the belief that the people who argue for these conspiracies have gotten better at explaining them, maybe with the help of machine-learning tools) and materialism (the belief that the ideas have become more attractive because of material conditions in the world).

I’m a materialist. I’ve been exposed to the arguments of conspiracy theorists all my life, and I have not experienced any qualitative leap in the quality of those arguments.

The major difference is in the world, not the arguments. In a time when actual conspiracies are commonplace, conspiracy theories acquire a ring of plausibility.

We have always had disagreements about what’s true, but today, we have a disagreement over how we know whether something is true. This is an epistemological crisis, not a crisis over belief. It’s a crisis over the credibility of our truth-seeking exercises, from scientific journals (in an era when the biggest journal publishers have been caught producing pay-to-play journals for junk science) to regulations (in an era when regulators are routinely cycling in and out of business) to education (in an era when universities are dependent on corporate donations to keep their lights on).

Targeting — surveillance capitalism — makes it easier to find people who are undergoing this epistemological crisis, but it doesn’t create the crisis. For that, you need to look to corruption.

And, conveniently enough, it’s corruption that allows surveillance capitalism to grow by dismantling monopoly protections, by permitting reckless collection and retention of personal data, by allowing ads to be targeted in secret, and by foreclosing on the possibility of going somewhere else where you might continue to enjoy your friends without subjecting yourself to commercial surveillance.

Tech is different

I reject both iterations of technological exceptionalism. I reject the idea that tech is uniquely terrible and led by people who are greedier or worse than the leaders of other industries, and I reject the idea that tech is so good — or so intrinsically prone to concentration — that it can’t be blamed for its present-day monopolistic status.

I think tech is just another industry, albeit one that grew up in the absence of real monopoly constraints. It may have been first, but it isn’t the worst, nor will it be the last.

But there’s one way in which I am a tech exceptionalist. I believe that online tools are the key to overcoming problems that are much more urgent than tech monopolization: climate change, inequality, misogyny, and discrimination on the basis of race, gender identity, and other factors. The internet is how we will recruit people to fight those fights, and how we will coordinate their labor. Tech is not a substitute for democratic accountability, the rule of law, fairness, or stability — but it’s a means to achieve these things.

The hard problem of our species is coordination. Everything from climate change to social change to running a business to making a family work can be viewed as a collective action problem.

The internet makes it easier than at any time before to find people who want to work on a project with you — hence the success of free and open-source software, crowdfunding, and racist terror groups — and easier than ever to coordinate the work you do.

The internet and the computers we connect to it also possess an exceptional quality: general-purposeness. The internet is designed to allow any two parties to communicate any data, using any protocol, without permission from anyone else. The only production design we have for computers is the general-purpose, “Turing complete” computer that can run every program we can express in symbolic logic.

This means that every time someone with a special communications need invests in infrastructure and techniques to make the internet faster, cheaper, and more robust, this benefit redounds to everyone else who is using the internet to communicate. And this also means that every time someone with a special computing need invests to make computers faster, cheaper, and more robust, every other computing application is a potential beneficiary of this work.

For these reasons, every type of communication is gradually absorbed into the internet, and every type of device — from airplanes to pacemakers — eventually becomes a computer in a fancy case.

While these considerations don’t preclude regulating networks and computers, they do call for gravitas and caution when doing so because changes to regulatory frameworks could ripple out to have unintended consequences in many, many other domains.

The upshot of this is that our best hope of solving the big coordination problems — climate change, inequality, etc. — is with free, fair, and open tech. Our best hope of keeping tech free, fair, and open is to exercise caution in how we regulate tech and to attend closely to the ways in which interventions to solve one problem might create problems in other domains.

Ownership of facts

Big Tech has a funny relationship with information. When you’re generating information — anything from the location data streaming off your mobile device to the private messages you send to friends on a social network — it claims the rights to make unlimited use of that data.

But when you have the audacity to turn the tables — to use a tool that blocks ads or slurps your waiting updates out of a social network and puts them in another app that lets you set your own priorities and suggestions or crawls their system to allow you to start a rival business — they claim that you’re stealing from them.

The thing is, information is a very bad fit for any kind of private property regime. Property rights are useful for establishing markets that can lead to the effective development of fallow assets. These markets depend on clear titles to ensure that the things being bought and sold in them can, in fact, be bought and sold.

Information rarely has such a clear title. Take phone numbers: There’s clearly something going wrong when Facebook slurps up millions of users’ address books and uses the phone numbers it finds in them to plot out social graphs and fill in missing information about other users.

But the phone numbers Facebook nonconsensually acquires in this transaction are not the “property” of the users they’re taken from, nor do they belong to the people whose phones ring when you dial those numbers. The numbers are mere integers, 10 digits in the U.S. and Canada, and they appear in millions of places, including somewhere deep in pi as well as numerous other contexts. Giving people ownership titles to integers is an obviously terrible idea.

Likewise for the facts that Facebook and other commercial surveillance operators acquire about us, like that we are the children of our parents or the parents to our children or that we had a conversation with someone else or went to a public place. These data points can’t be property in the sense that your house or your shirt is your property because the title to them is intrinsically muddy: Does your mom own the fact that she is your mother? Do you? Do both of you? What about your dad — does he own this fact too, or does he have to license the fact from you (or your mom or both of you) in order to use this fact? What about the hundreds or thousands of other people who know these facts?

If you go to a Black Lives Matter demonstration, do the other demonstrators need your permission to post their photos from the event? The online fights over when and how to post photos from demonstrations reveal a nuanced, complex issue that cannot be easily hand-waved away by giving one party a property right that everyone else in the mix has to respect.

The fact that information isn’t a good fit with property and markets doesn’t mean that it’s not valuable. Babies aren’t property, but they’re inarguably valuable. In fact, we have a whole set of rules just for babies as well as a subset of those rules that apply to humans more generally. Someone who argues that babies won’t be truly valuable until they can be bought and sold like loaves of bread would be instantly and rightfully condemned as a monster.

It’s tempting to reach for the property hammer when Big Tech treats your information like a nail — not least because Big Tech are such prolific abusers of property hammers when it comes to their information. But this is a mistake. If we allow markets to dictate the use of our information, then we’ll find that we’re sellers in a buyers’ market where the Big Tech monopolies set a price for our data that is so low as to be insignificant or, more likely, set at a nonnegotiable price of zero in a click-through agreement that you don’t have the opportunity to modify.

Meanwhile, establishing property rights over information will create insurmountable barriers to independent data processing. Imagine that we require a license to be negotiated when a translated document is compared with its original, something Google has done and continues to do billions of times to train its automated language translation tools. Google can afford this, but independent third parties cannot. Google can staff a clearances department to negotiate one-time payments to the likes of the EU (one of the major repositories of translated documents) while independent watchdogs wanting to verify that the translations are well prepared, or to root out bias in translations, will find themselves needing a staffed-up legal department and millions for licenses before they can even get started.

The same goes for things like search indexes of the web or photos of people’s houses, which have become contentious thanks to Google’s Street View project. Whatever problems may exist with Google’s photographing of street scenes, resolving them by letting people decide who can take pictures of the facades of their homes from a public street will surely create even worse ones. Think of how street photography is important for newsgathering — including informal newsgathering, like photographing abuses of authority — and how being able to document housing and street life is important for contesting eminent domain, advocating for social aid, reporting planning and zoning violations, documenting discriminatory and unequal living conditions, and more.

The ownership of facts is antithetical to many kinds of human progress. It’s hard to imagine a rule that limits Big Tech’s exploitation of our collective labors without inadvertently banning people from gathering data on online harassment or compiling indexes of changes in language or simply investigating how the platforms are shaping our discourse — all of which require scraping data that other people have created and subjecting it to scrutiny and analysis.

Persuasion works… slowly

The platforms may oversell their ability to persuade people, but obviously, persuasion works sometimes. Whether it’s the quiet, private work LGBTQ people did to recruit allies and normalize sexual diversity or the decadeslong project to convince people that markets are the only efficient way to solve complicated resource allocation problems, it’s clear that our societal attitudes can change.

The project of shifting societal attitudes is a game of inches and years. For centuries, svengalis have purported to be able to accelerate this process, but even the most brutal forms of propaganda have struggled to make permanent changes. Joseph Goebbels was able to subject Germans to daily, mandatory, hourslong radio broadcasts, to round up and torture and murder dissidents, and to seize full control over their children’s education while banning any literature, broadcasts, or films that did not comport with his worldview.

Yet, after 12 years of terror, once the war ended, Nazi ideology was largely discredited in both East and West Germany, and a program of national truth and reconciliation was put in its place. Racism and authoritarianism were never fully abolished in Germany, but neither were the majority of Germans irrevocably convinced of Nazism — and the rise of racist authoritarianism in Germany today tells us that the liberal attitudes that replaced Nazism were no more permanent than Nazism itself.

Racism and authoritarianism have also always been with us. Anyone who’s reviewed the kind of messages and arguments that racists put forward today would be hard-pressed to say that they have gotten better at presenting their ideas. The same pseudoscience, appeals to fear, and circular logic that racists presented in the 1980s, when the cause of white supremacy was on the wane, are to be found in the communications of leading white nationalists today.

If racists haven’t gotten more convincing in the past decade, then how is it that more people have been convinced to be openly racist in that time? I believe that the answer lies in the material world, not the world of ideas. The ideas haven’t gotten more convincing, but people have become more afraid. Afraid that the state can’t be trusted to act as an honest broker in life-or-death decisions, from those regarding the management of the economy to the regulation of painkillers to the rules for handling private information. Afraid that the world has become a game of musical chairs in which the chairs are being taken away at a never-before-seen rate. Afraid that justice for others will come at their expense. Monopolism isn’t the cause of these fears, but the inequality, material desperation, and policy malpractice that monopolism contributes to are significant contributors to these conditions. Inequality creates the conditions for both conspiracies and violent racist ideologies, and then surveillance capitalism lets opportunists target the fearful and the conspiracy-minded.

Paying won’t help

As the old saw goes, “If you’re not paying for the product, you’re the product.”

It’s a commonplace belief today that the advent of free, ad-supported media was the original sin of surveillance capitalism. The reasoning is that the companies that charged for access couldn’t “compete with free” and so they were driven out of business. Their ad-supported competitors, meanwhile, declared open season on their users’ data in a bid to improve their ad targeting and make more money and then resorted to the most sensationalist tactics to generate clicks on those ads. If only we’d pay for media again, we’d have a better, more responsible, more sober discourse that would be better for democracy.

But the degradation of news products long precedes the advent of ad-supported online news. Long before newspapers were online, lax antitrust enforcement had opened the door for unprecedented waves of consolidation and roll-ups in newsrooms. Rival newspapers were merged, reporters and ad sales staff were laid off, and physical plants were sold and leased back, leaving the companies loaded up with debt through leveraged buyouts and subsequent profit-taking by the new owners. In other words, it wasn’t merely shifts in the classified advertising market, which was long held to be the primary driver in the decline of the traditional newsroom, that made news companies unable to adapt to the internet — it was monopolism.

Then, as news companies did come online, the ad revenues they commanded dropped even as the number of internet users (and thus potential online readers) increased. That shift was a function of consolidation in the ad sales market, with Google and Facebook emerging as duopolists who made more money every year from advertising while paying less and less of it to the publishers whose work the ads appeared alongside. Monopolism created a buyer’s market for ad inventory, with Facebook and Google acting as gatekeepers.

Paid services continue to exist alongside free ones, and often it is these paid services — anxious to prevent people from bypassing their paywalls or sharing paid media with freeloaders — that exert the most control over their customers. Apple’s iTunes and App Stores are paid services, but to maximize their profitability, Apple has to lock its platforms so that third parties can’t make compatible software without permission. These locks allow the company to exercise both editorial control (enabling it to exclude controversial political material) and technological control, including control over who can repair the devices it makes. If we’re worried that ad-supported products deprive people of their right to self-determination by using persuasion techniques to nudge their purchase decisions a few degrees in one direction or the other, then the near-total control a single company holds over the decision of who gets to sell you software, parts, and service for your iPhone should have us very worried indeed.

We shouldn’t just be concerned about payment and control: The idea that paying will improve discourse is also dangerously wrong. The poor success rate of targeted advertising means that the platforms have to incentivize you to “engage” with posts at extremely high levels to generate enough pageviews to safeguard their profits. As discussed earlier, to increase engagement, platforms like Facebook use machine learning to guess which messages will be most inflammatory and make a point of shoving those into your eyeballs at every turn so that you will hate-click and argue with people.
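
Mechanically, such a feed ranker is mundane: score each candidate post for predicted engagement, then sort. A toy sketch follows; the features and weights are invented for illustration, not drawn from any real system.

```python
# Toy engagement ranker: score posts by predicted clicks and sort.
# The features and weights are invented for illustration; real systems
# learn them from logged behavior, but the sorting logic is the same.
import math

WEIGHTS = {"outrage_words": 2.0, "from_friend": 0.5, "has_photo": 0.3}

def predicted_engagement(features: dict) -> float:
    score = sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-score))  # squash to a 0-1 probability

posts = [
    {"id": "vacation", "features": {"outrage_words": 0, "from_friend": 1, "has_photo": 1}},
    {"id": "hot_take", "features": {"outrage_words": 2, "from_friend": 0, "has_photo": 0}},
]
for post in sorted(posts, key=lambda p: predicted_engagement(p["features"]), reverse=True):
    print(post["id"], round(predicted_engagement(post["features"]), 3))
# The enraging post outranks the pleasant one: not because anyone
# chose outrage, but because outrage predicts clicks.
```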

Perhaps paying would fix this, the reasoning goes. If platforms could be economically viable even if you stopped clicking on them once your intellectual and social curiosity had been slaked, then they would have no reason to algorithmically enrage you to get more clicks out of you, right?

There may be something to that argument, but it still ignores the wider economic and political context of the platforms and the world that allowed them to grow so dominant.

Platforms are world-spanning and all-encompassing because they are monopolies, and they are monopolies because we have gutted our most important and reliable anti-monopoly rules. Antitrust was neutered as a key part of the project to make the wealthy wealthier, and that project has worked. The vast majority of people on Earth have a negative net worth, and even the dwindling middle class is in a precarious state, undersaved for retirement, underinsured for medical disasters, and undersecured against climate and technology shocks.

In this wildly unequal world, paying doesn’t improve the discourse; it simply prices discourse out of the range of the majority of people. Paying for the product is dandy, if you can afford it.

If you think today’s filter bubbles are a problem for our discourse, imagine what they’d be like if rich people inhabited free-flowing Athenian marketplaces of ideas where you have to pay for admission while everyone else lives in online spaces that are subsidized by wealthy benefactors who relish the chance to establish conversational spaces where the “house rules” forbid questioning the status quo. That is, imagine if the rich seceded from Facebook, and then, instead of running ads that made money for shareholders, Facebook became a billionaire’s vanity project that also happened to ensure that nobody talked about whether it was fair that only billionaires could afford to hang out in the rarified corners of the internet.

Behind the idea of paying for access is a belief that free markets will address Big Tech’s dysfunction. After all, to the extent that people have a view of surveillance at all, it is generally an unfavorable one, and the longer and more thoroughly one is surveilled, the less one tends to like it. Same goes for lock-in: If HP’s ink or Apple’s App Store were really obviously fantastic, they wouldn’t need technical measures to prevent users from choosing a rival’s product. The only reason these technical countermeasures exist is that the companies don’t believe their customers would voluntarily submit to their terms, and they want to deprive them of the choice to take their business elsewhere.

Advocates for markets laud their ability to aggregate the diffused knowledge of buyers and sellers across a whole society through demand signals, price signals, and so on. The argument for surveillance capitalism being a “rogue capitalism” is that machine-learning-driven persuasion techniques distort decision-making by consumers, leading to incorrect signals — consumers don’t buy what they prefer, they buy what they’re tricked into preferring. It follows that the monopolistic practices of lock-in, which do far more to constrain consumers’ free choices, are even more of a “rogue capitalism.”

The profitability of any business is constrained by the possibility that its customers will take their business elsewhere. Both surveillance and lock-in are anti-features that no customer wants. But monopolies can capture their regulators, crush their competitors, insert themselves into their customers’ lives, and corral people into “choosing” their services regardless of whether they want them — it’s fine to be terrible when there is no alternative.

Ultimately, surveillance and lock-in are both simply business strategies that monopolists can choose. Surveillance companies like Google are perfectly capable of deploying lock-in technologies — just look at the onerous Android licensing terms that require device-makers to bundle in Google’s suite of applications. And lock-in companies like Apple are perfectly capable of subjecting their users to surveillance if it means keeping the Chinese government happy and preserving ongoing access to Chinese markets. Monopolies may be made up of good, ethical people, but as institutions, they are not your friend — they will do whatever they can get away with to maximize their profits, and the more monopolistic they are, the more they can get away with.

An “ecology” moment for trustbusting

If we’re going to break Big Tech’s death grip on our digital lives, we’re going to have to fight monopolies. That may sound pretty mundane and old-fashioned, something out of the New Deal era, while ending the use of automated behavioral modification feels like the plotline of a really cool cyberpunk novel.

Meanwhile, breaking up monopolies is something we seem to have forgotten how to do. There is a bipartisan, trans-Atlantic consensus that breaking up companies is a fool’s errand at best — liable to mire your federal prosecutors in decades of litigation — and counterproductive at worst, eroding the “consumer benefits” of large companies with massive efficiencies of scale.

But trustbusters once strode the nation, brandishing law books, terrorizing robber barons, and shattering the illusion of monopolies’ all-powerful grip on our society. The trustbusting era could not begin until we found the political will — until the people convinced politicians they’d have their backs when they went up against the richest, most powerful men in the world.

Could we find that political will again?

Copyright scholar James Boyle has described how the term “ecology” marked a turning point in environmental activism. Prior to the adoption of this term, people who wanted to preserve whale populations didn’t necessarily see themselves as fighting the same battle as people who wanted to protect the ozone layer or fight freshwater pollution or beat back smog or acid rain.

But the term “ecology” welded these disparate causes together into a single movement, and the members of this movement found solidarity with one another. The people who cared about smog signed petitions circulated by the people who wanted to end whaling, and the anti-whalers marched alongside the people demanding action on acid rain. This uniting behind a common cause completely changed the dynamics of environmentalism, setting the stage for today’s climate activism and the sense that preserving the habitability of the planet Earth is a shared duty among all people.

I believe we are on the verge of a new “ecology” moment dedicated to combating monopolies. After all, tech isn’t the only concentrated industry, nor is it even the most concentrated of industries.

You can find partisans for trustbusting in every sector of the economy. Everywhere you look, you can find people who’ve been wronged by monopolists who’ve trashed their finances, their health, their privacy, their educations, and the lives of people they love. Those people have the same cause as the people who want to break up Big Tech and the same enemies. When most of the world’s wealth is in the hands of a very few, it follows that nearly every large company will have overlapping shareholders.

That’s the good news: With a little bit of work and a little bit of coalition building, we have more than enough political will to break up Big Tech and every other concentrated industry besides. First we take Facebook, then we take AT&T/WarnerMedia.

But here’s the bad news: Much of what we’re doing to tame Big Tech instead of breaking up the big companies also forecloses on the possibility of breaking them up later.

Big Tech’s concen­tra­tion currently means that their inac­tion on harass­ment, for exam­ple, leaves users with an impos­si­ble choice: absent them­sel­ves from public discourse by, say, quit­ting Twit­ter or endure vile, cons­tant abuse. Big Tech’s over-collec­tion and over-reten­tion of data results in horri­fic iden­tity theft. And their inac­tion on extre­mist recruit­ment means that white supre­ma­cists who lives­tream their shoo­ting rampa­ges can reach an audi­ence of billi­ons. The combi­na­tion of tech concen­tra­tion and media concen­tra­tion means that artists’ inco­mes are falling even as the reve­nue gene­ra­ted by their crea­ti­ons are incre­a­sing.

Yet govern­ments confron­ting all of these problems all inevi­tably converge on the same solu­tion: depu­tize the Big Tech giants to police their users and render them liable for their users’ bad acti­ons. The drive to force Big Tech to use auto­ma­ted filters to block everyt­hing from copy­right infrin­ge­ment to sex-traf­fic­king to violent extre­mism means that tech compa­nies will have to allo­cate hundreds of milli­ons to run these compli­ance systems.

These rules — the EU’s new Direc­tive on Copy­right, Austra­li­a’s new terror regu­la­tion, Ameri­ca’s FOSTA/SESTA sex-traf­fic­king law and more — are not just death warrants for small, upstart compe­ti­tors that might challenge Big Tech’s domi­nance but who lack the deep pockets of esta­blis­hed incum­bents to pay for all these auto­ma­ted systems. Worse still, these rules put a floor under how small we can hope to make Big Tech.

That’s because any move to break up Big Tech and cut it down to size will have to cope with the hard limit of not making these compa­nies so small that they can no longer afford to perform these duties — and it’s expen­sive to invest in those auto­ma­ted filters and outsource content mode­ra­tion. It’s alre­ady going to be hard to unwind these deeply concen­tra­ted, chime­ric behe­moths that have been welded toget­her in the pursuit of mono­poly profits. Doing so while simul­ta­ne­ously finding some way to fill the regu­la­tory void that will be left behind if these self-poli­cing rulers were forced to suddenly abdi­cate will be much, much harder.

Allowing the platforms to grow to their present size has given them a dominance that is nearly insurmountable — deputizing them with public duties to redress the pathologies created by their size makes it virtually impossible to reduce that size. Lather, rinse, repeat: If the platforms don’t get smaller, they will get larger, and as they get larger, they will create more problems, which will give rise to more public duties for the companies, which will make them bigger still.

We can work to fix the internet by breaking up Big Tech and depriving them of monopoly profits, or we can work to fix Big Tech by making them spend their monopoly profits on governance. But we can’t do both. We have to choose between a vibrant, open internet and a dominated, monopolized internet commanded by Big Tech giants that we must constantly struggle to get to behave themselves.

Make Big Tech small again

Trustbusting is hard. Breaking big companies into smaller ones is expensive and time-consuming. So time-consuming that by the time you’re done, the world has often moved on and rendered years of litigation irrelevant. From 1969 to 1982, the U.S. government pursued an antitrust case against IBM over its dominance of mainframe computing — but the case collapsed in 1982 because mainframes were being speedily replaced by PCs.

It’s far easier to prevent concentration than to fix it, and reinstating the traditional contours of U.S. antitrust enforcement will, at the very least, prevent further concentration. That means bans on mergers between large companies, on big companies acquiring nascent competitors, and on platform companies competing directly with the companies that rely on the platforms.

These powers are all in the plain language of U.S. antitrust laws, so in theory, a future U.S. president could simply direct their attorney general to enforce the law as it was written. But after decades of judicial “education” in the benefits of monopolies, after multiple administrations that have packed the federal courts with lifetime-appointed monopoly cheerleaders, it’s not clear that mere administrative action would do the trick.

If the courts frustrate the Justice Department and the president, the next stop would be Congress, which could eliminate any doubt about how antitrust law should be enforced in the U.S. by passing new laws that boil down to saying, “Knock it off. We all know what the Sherman Act says. Robert Bork was a deranged fantasist. For avoidance of doubt, fuck that guy.” In other words, the problem with monopolies is monopolism — the concentration of power into too few hands, which erodes our right to self-determination. If there is a monopoly, the law wants it gone, period. Sure, get rid of monopolies that create “consumer harm” in the form of higher prices, but also, get rid of other monopolies, too.

But this only prevents things from getting worse. To help them get better, we will have to build coalitions with other activists in the anti-monopoly ecology movement — a pluralism movement or a self-determination movement — and target existing monopolies in every industry for breakup and structural separation rules that prevent, for example, the giant eyewear monopolist Luxottica from dominating both the sale and the manufacture of spectacles.

In an important sense, it doesn’t matter which industry the breakups begin in. Once they start, shareholders in every industry will start to eye their investments in monopolists skeptically. As trustbusters ride into town and start making lives miserable for monopolists, the debate around every corporate boardroom’s table will shift. People within corporations who’ve always felt uneasy about monopolism will gain a powerful new argument to fend off their evil rivals in the corporate hierarchy: “If we do it my way, we make less money; if we do it your way, a judge will fine us billions and expose us to ridicule and public disapprobation. So even though I get that it would be really cool to do that merger, lock out that competitor, or buy that little company and kill it before it can threaten us, we really shouldn’t — not if we don’t want to get tied to the DOJ’s bumper and get dragged up and down Trustbuster Road for the next 10 years.”

20 GOTO 10

Fixing Big Tech will require a lot of iteration. As cyber lawyer Lawrence Lessig wrote in his 1999 book, Code and Other Laws of Cyberspace, our lives are regulated by four forces: law (what’s legal), code (what’s technologically possible), norms (what’s socially acceptable), and markets (what’s profitable).

If you could wave a wand and get Congress to pass a law that re-fanged the Sherman Act tomorrow, you could use the impending breakups to convince venture capitalists to fund competitors to Facebook, Google, Twitter, and Apple that would be waiting in the wings after they were cut down to size.

But getting Congress to act will require a massive normative shift, a mass movement of people who care about monopolies — and pulling them apart.

Getting people to care about monopolies will take technological interventions that help them to see what a world free from Big Tech might look like. Imagine if someone could make a beloved (but unauthorized) third-party Facebook or Twitter client that dampens the anxiety-producing algorithmic drumbeat and still lets you talk to your friends without being spied upon — something that made social media more sociable and less toxic. Now imagine that it gets shut down in a brutal legal battle. It’s always easier to convince people that something must be done to save a thing they love than it is to excite them about something that doesn’t even exist yet.

Neither law nor code nor norms nor markets is sufficient on its own to reform Big Tech. But a profitable competitor to Big Tech could bankroll a legislative push; legal reform can embolden a toolsmith to make a better tool; the tool can create customers for a potential business who value the benefits of the internet but want them delivered without Big Tech; and that business can get funded and divert some of its profits to legal reform. 20 GOTO 10 (or lather, rinse, repeat). Do it again, but this time, get farther! After all, this time you’re starting with weaker Big Tech adversaries, a constituency that understands things can be better, Big Tech rivals who’ll help ensure their own future by bankrolling reform, and code that other programmers can build on to weaken Big Tech even further.
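
For anyone who never typed programs into an early home computer: this section’s title is the second line of the canonical two-line BASIC program that loops forever. A minimal sketch of the idiom (the message on line 10 is just an illustration):

    10 PRINT "FIX THE INTERNET"
    20 GOTO 10

Line 20 jumps execution back to line 10, and the program runs again from the top, endlessly. That’s the shape of the reform cycle above: each pass through law, code, norms, and markets ends where it began, but with a stronger starting position for the next iteration.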

The surveillance capitalism hypothesis — that Big Tech’s products really work as well as they say they do and that’s why everything is so screwed up — is way too easy on surveillance and even easier on capitalism. Companies spy because they believe their own BS, and companies spy because governments let them, and companies spy because any advantage from spying is so short-lived and minor that they have to do more and more of it just to stay in place.

As to why things are so screwed up? Capitalism. Specifically, the monopolism that creates inequality and the inequality that creates monopolism. It’s a form of capitalism that rewards sociopaths who destroy the real economy to inflate the bottom line, and they get away with it for the same reason companies get away with spying: because our governments are in thrall both to the ideology that says monopolies are actually just fine and to the ideology that says that in a monopolistic world, you’d better not piss off the monopolists.

Surveillance doesn’t make capitalism rogue. Capitalism’s unchecked rule begets surveillance. Surveillance isn’t bad because it lets people manipulate us. It’s bad because it crushes our ability to be our authentic selves — and because it lets the rich and powerful figure out who might be thinking of building guillotines and what dirt they can use to discredit those embryonic guillotine-builders before they can even get to the lumberyard.

Up and through

With all the problems of Big Tech, it’s tempting to imagine solving them by returning to a world without tech at all. Resist that temptation.

The only way out of our Big Tech problem is up and through. If our future is not reliant upon high tech, it will be because civilization has fallen. Big Tech wired together a planetary, species-wide nervous system that, with the proper reforms and course corrections, is capable of seeing us through the existential challenge of our species and planet. Now it’s up to us to seize the means of computation, putting that electronic nervous system under democratic, accountable control.

I am, secretly, despite what I have said earlier, a tech exceptionalist. Not in the sense of thinking that tech should be given a free pass to monopolize because it has “economies of scale” or some other nebulous feature. I’m a tech exceptionalist because I believe that getting tech right matters and that getting it wrong will be an unmitigated catastrophe — and doing it right can give us the power to work together to save our civilization, our species, and our planet.