“We Are Not Safe:” When Platform Censorship Becomes Algorithmic Trauma

Oppressive social structures are coded into our feeds to alienate and harm marginalized identity dissidents in Latin America. How can we fight back?

A sweet snap of lesbian love at a pride parade in Buenos Aires, Argentina, is promptly purged from cyberspace. A shirtless childhood picture of a Black female Brazilian scientist triggers a restriction to the link feature. An artistic post by an Argentina-based photographer showcasing non-conforming female bodies quickly disappears. If you are an identity dissident in Latin America, this kind of social media censorship is commonplace. Since 2019, my Instagram feed has been flooded repeatedly with posts reporting platform censorship. As a tech worker who navigates the software development world, and an activist involved in digital security and digital rights, I couldn’t help but notice a pattern to whose content was censored.

I asked my family members in the Brazilian state of Minas Gerais, who are conservatives and staunch Bolsonaro supporters, if they’d ever encountered such messages on their social media accounts, but they said no. Evidently, this was not happening everywhere. Who else was experiencing such censorship? I asked a close circle of my friends to take a screenshot every time they saw a “community guidelines infringed” notice pop up on their feeds. We are journalists, tech workers, teachers, artists, and activists—but our professions seem secondary in this case. In the myriad of existing identity boxes, our group is “other”: we are mixed race, Black, Indigenous, fat, LGBTQIA+, transgender, and nonbinary.

“How will this kind of platform censorship impact our already vulnerable communities who use these platforms as communication channels and living archives?”

We are all Latin Americans, and we are all identity dissidents—we do not conform to the dominant gender, race, sexual orientation, and conservative political alignment norms. Seeing these censorship messages repeatedly is traumatic. It makes us feel out of place and space—uncomfortable, isolated, exhausted, and numbed. I wonder: how will this kind of platform censorship impact our already vulnerable communities who use these platforms as communication channels and living archives? What effect would this have on our collective memory as marginalized communities?

 

Codifying trauma

“Trauma,” writes Canadian Gender Studies scholar Ann Cvetkovich, “can be understood as a sign or symptom of a broader systemic problem, a moment in which abstract social systems can actually be felt or sensed.” I understand “algorithmic trauma” as the harm intentionally inflicted on end users through the byproducts of software design processes lacking guiding principles to counterbalance racial, gender, and geo-political hierarchies. Digital reality cries out for change as oppressive social structures are coded into the current engineered features, continuously alienating identity dissidents.

Identity dissidents suffer the systemic effects of being consistently censored and classified as “undesirable.” These labels become training data, teaching algorithms how desired bodies must look and behave to be worthy of existing on social media. In addition, being digitally exposed to notifications with puzzling messages such as “Your content infringes our community guidelines” causes trauma.
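To make this label-to-algorithm feedback loop concrete, the sketch below (not any platform’s real pipeline) shows how biased moderation decisions can become the labels that automate the same bias at scale. The groups, features, and threshold are invented for illustration only.

```python
# Hypothetical sketch: yesterday's biased removals become today's training
# labels, so the bias is automated and scaled. Not any platform's real code.
from dataclasses import dataclass

@dataclass
class Post:
    author_group: str   # illustrative proxy, e.g. "dissident" or "dominant"
    body_visible: bool  # whether the post shows a human body

def human_review(post: Post) -> int:
    """A deliberately biased reviewer: bodies posted by the 'dissident'
    group are removed (1); everything else is kept (0)."""
    return 1 if (post.body_visible and post.author_group == "dissident") else 0

history = [
    Post("dissident", True),
    Post("dissident", True),
    Post("dissident", False),
    Post("dominant", True),
]
# The output of a biased process is treated as ground truth.
labeled = [(post, human_review(post)) for post in history]

# A naive "model": memorize the removal rate per group.
removal_counts = {}
for post, label in labeled:
    removed, total = removal_counts.get(post.author_group, (0, 0))
    removal_counts[post.author_group] = (removed + label, total + 1)

def auto_moderate(post: Post) -> bool:
    removed, total = removal_counts.get(post.author_group, (0, 1))
    return removed / total > 0.5  # the learned bias, now applied automatically

print(auto_moderate(Post("dissident", True)))   # True: censored again
print(auto_moderate(Post("dominant", True)))    # False
```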

According to Cvetkovich, life under capitalism may manifest “as much in the dull drama of everyday life as in cataclysmic or punctual events.” The trauma of living under systemic oppression can present in both quotidian and extreme situations, and it can be intense or numbing. In this way, algorithmic trauma becomes a digital extension of the violence, racism, misogyny, and LGBTQIA+ phobia that identity dissidents face daily in the outside world. U.S. psychologist Margaret Crastnopol describes these virtual traumas as “small, subtle psychic hurts.” For example, a user might think: “How can I violate the community’s guidelines by simply posting pictures of my body? Am I not also a part of this community?”

Post by anaharff. The notification on the image reads “Post removed due to nudity or sexual activity. Published on September 6 at 15:26”. In the caption: “I’ve already lost count of two things: the number of times my censored photos have been removed and the number of times I’ve reported degrading content that the platform says ‘does not violate the guidelines.’ Yet, even after exposing this inconsistency, these posts of mine were disappearing, one by one. I’m still here anyway, and these injustices keep happening. It’s not because I talk about it that they will stop, but it’s just that I get tired. I don’t want to sound like a broken record just talking about how unfair everything is. But sometimes, it’s good to be reminded of it.”

 

Labor: who gets to design algorithmic trauma?

But who decides what is acceptable or not on social media? At the software development and product strategy level, the U.S. dominance of tech companies has been central to sustaining digital colonialism, a phenomenon that has far-reaching consequences for the Global South. For instance, most Western social media platforms operate from Silicon Valley, California, in the U.S.A., yet according to data published in 2021, 88% of Instagram users reside outside of the U.S.A. Geo-politics influences the software development and decision-making processes, aggravating the issue of unequal exposure in digital spaces.

Furthermore, tech companies’ workforces do not represent their audience’s diversity. For example, Facebook and Twitter employees consist largely of Bay Area, California residents glowing in the Pacific lights—digital nomads untethered to the lands they gentrify. According to the Diversity Update report, Facebook’s U.S.-based employees are 39% white and 45% Asian-American. Until a few years ago, Facebook grouped Hispanic and Black people together when disclosing employee data, and it is unclear if they grouped non-Hispanic Latinos in that data—or if they even care to know the difference.

South African sociologist Michael Kwet defines Silicon Valley as an imperial force. Made-in-California technology affects identity dissidents who remain trapped as third-party observers in the looking glass of their own lives. In Decolonizando Valores (Decolonizing Values), Brazilian philosopher Thiago Teixeira talks about the “colonial mirror,” a destructive composition promoting hate for the “other.” In this case, “other” refers to anyone not white, male, cis-heterosexual, or located in territories that are economically privileged. Facebook’s recruiting data and the application of its community guidelines seem to reflect the colonial mirror, dictating who gets to define the code and who suffers algorithmic trauma.

Post by noticia.preta. The text in the video says: “Instagram deletes post of racism report made by Noticia Preta”. The caption reads: “We were the first newspaper to report the case of racism suffered by Matheus Ribeiro, in Leblon, Rio de Janeiro, and Instagram has just deleted our post without any explanation. We reported it Sunday night, and less than 24 hours later, when the international media and other portals published it, they deleted ours. I’ll let you answer ‘Why is that?’”

Technology professionals from the Global South have seen a rise in job offers from U.S. and European companies in recent years, reinforcing labor inequalities. As a result, national industries are overwhelmed by foreign options and can’t develop locally-centered innovations. According to an international nonprofit journalism organization, Rest of World, U.S. companies “pillage tech talent” in Latin America and Africa, thereby undermining the local endeavors that can’t afford to offer higher salaries. This strategy serves the Global North market, securing a cheap but highly qualified labor force. However, workers from the South don’t fill positions of strategic importance. Instead, they remain scalable code writers and problem solvers.

In parallel, numerous Latin American activists and scholars tackle how algorithms exacerbate unequal representation and exposure online. For example, a coalition of civil society organizations across the continent published Standards for democratic regulation of large platforms, a regional perspective suggesting models for co-regulation and policy recommendations. They focus on transparency, terms and conditions, content removal, right to defense and appeal, and accountability. This collectively sourced report acknowledges these companies’ power over the flow of information on the Internet and their role as gatekeepers.

Coding Rights, a Brazilian research organization, shared better pathways in Decolonizing AI: A transfeminist approach to data and social justice. They advocate for an online social environment free of gender violence, allowing identity dissidents to express themselves openly. Similarly, Derechos Digitales, a digital rights organization in Latin America, produced the 2021 Hate Speech in Latin America report, analyzing regulatory trends in the region and risks to freedom of expression. They recommend—something that must sound almost radical to the neoliberal platforms—that content published on social networks and internet platforms conform to global human rights standards. This should apply not only to what is allowed or not but also to transparency, minimum guarantees of due process, the right to appeal, and information to the end user.

A further report, Content Removal: inequality and exclusion from digital civic space, by the independent organization Article 19 Mexico, highlights a few points essential to understanding how algorithmic trauma affects identity dissident communities. The authors view content removal as a form of violence enacted on the vulnerable. Consequently, it “contributes to generating a climate of exclusion, censorship, self-censorship and social apathy in the digital environment.” The report includes a survey in which participants describe sensations of anxiety and vulnerability, such as: “I limit myself to publishing certain things or images because (…) I don’t have thick skin and [violence] affects me.” Another person writes: “We reach spaces [the social networks] that are not so safe. So I have censored myself so as not to expose myself to the removal of content.”

“Unsurprisingly, most of the attempts to challenge the status quo come from the margins and not from positions of power.”

 

Community design: from trauma to healing

In an industry where the people designing the harm are almost never those who bear the brunt of it, we desperately need a social justice perspective to lead software development and redesign our online experiences. Yet, unsurprisingly, most of the attempts to challenge the status quo come from the margins and not from positions of power.

The collectively envisioned Design Justice framework puts forward principles to rethink design processes and to center the people marginalized by design and its uses. The Design Justice Network, initiated in Detroit, U.S.A. by 30 designers and community organizers engaged in social justice, draws on liberatory pedagogy practices and Black Feminist Theory.

It proposes to measure design’s impact by asking: “Who participated in the design process? Who benefited from the design? And who was harmed by design?” These principles guided the development of the social media initiative Lips Social, founded by U.S. social algorithm researcher Annie Brown. The platform originated as a response to communities of sex-positive artists and sex workers fed up with Instagram’s censorship and online harassment. Once on the app, a message greets users, asserting their—and not an algorithm’s—control over what they would prefer to see in the news feed, and asking them to choose among a set of predetermined categories of content such as #selflove or #photography. This seemingly simple design feature reflects a deep respect for and understanding of the community it serves. Adding a trigger warning label also mitigates what can constitute traumatic content for the user. Through design and code, Lips Social is helping its community heal, connect, and economically sustain itself.
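The sketch below illustrates the general idea of such a user-controlled feed: the user, not an engagement-ranking algorithm, decides which categories appear, and author-chosen trigger warnings travel with each post. The types, category names, and labels are invented for illustration; this is not Lips Social’s actual implementation.

```python
# Minimal sketch of a user-controlled feed (not Lips Social's real code):
# the user picks categories once, and only those categories are shown,
# with any author-chosen trigger warning displayed up front.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Post:
    author: str
    category: str                          # e.g. "#selflove", "#photography"
    trigger_warning: Optional[str] = None  # label chosen by the author

@dataclass
class Feed:
    chosen_categories: set = field(default_factory=set)  # set by the user, not inferred

    def render(self, posts):
        visible = []
        for post in posts:
            if post.category not in self.chosen_categories:
                continue  # respect the user's explicit choice; no engagement ranking
            warning = f"[TW: {post.trigger_warning}] " if post.trigger_warning else ""
            visible.append(f"{warning}{post.author}: {post.category}")
        return visible

feed = Feed(chosen_categories={"#selflove", "#art"})
posts = [
    Post("ana", "#selflove"),
    Post("mari", "#photography"),
    Post("noti", "#art", trigger_warning="racist violence"),
]
print(feed.render(posts))
# ['ana: #selflove', '[TW: racist violence] noti: #art']
```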

Post by mariaribeiro_photo. Text on the image: “Reposted.” Caption: “I don’t even have enough energy to write about it. Help me to not have my account deleted by interacting with this post and something wonderful will happen in your life (or not)…”

Reimagining can be a powerful design exercise to counterbalance hegemonic technologies. In an effort to shape more inclusive experiences, Coding Rights developed The Oracle for Transfeminist Technologies, a speculative tool using divination cards to collectively “envision and share ideas for transfeminist technologies from the future” and redesign existing products. The cards invite players to apply values such as solidarity, resilience, or horizontality, while simultaneously provoking reflection on their positionality. Given a particular situation, such as amplifying the narratives of women and queer people, players try to rethink and reclaim pornography or to defend the right to anonymity online.

Another example is the color-picker tool by Safiya Umoja Noble, an internet studies scholar from the U.S.A., who created “The Imagine Engine” as an alternative way of searching for information online. The color-picker tool calls attention to information’s findability by rendering results in nuanced shades, making it easier to identify the boundaries between news, entertainment, pornography, and academic scholarship.

“We must acknowledge that algorithmic trauma is a consequence of software development processes and business strategies underpinned by colonial practices of extractivism and exclusion.”

Several alternatives for mitigating the harms already reproduced in these systems are surfacing. The Algorithmic Justice League (AJL) is a U.S.-based organization combining art and research to illuminate the social implications of artificial intelligence. In a recent report, they spotted a trend in Bug Bounty Programs (BBPs), events sponsored and organized by companies to monetarily compensate individuals for reporting bugs and surfacing software vulnerabilities, some of which may have caused algorithmic harm. The AJL provided a set of design levers and recommendations for how to shape BBPs toward the discovery and reduction of algorithmic harms.
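As one illustration of what such a program might collect, the sketch below shows a possible structured harm report; the field names and severity scale are assumptions made for this example, not AJL’s or any company’s actual schema.

```python
# Hypothetical sketch of a structured report for an algorithmic-harm bounty
# program. Field names and the 1-5 severity scale are invented for illustration.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class HarmReport:
    platform: str
    feature: str          # e.g. "content moderation", "feed ranking"
    affected_group: str   # the community affected, in its own words
    description: str      # what happened, as told by the reporter
    evidence_urls: list   # screenshots, notices, archived posts
    severity: int         # 1 (nuisance) .. 5 (severe, systemic)
    reported_at: str = ""

    def __post_init__(self):
        if not 1 <= self.severity <= 5:
            raise ValueError("severity must be between 1 and 5")
        if not self.reported_at:
            self.reported_at = datetime.now(timezone.utc).isoformat()

report = HarmReport(
    platform="ExamplePlatform",
    feature="content moderation",
    affected_group="LGBTQIA+ artists in Latin America",
    description="Non-sexual, body-positive posts repeatedly removed as 'nudity'.",
    evidence_urls=["https://example.org/screenshot-1"],
    severity=4,
)
print(json.dumps(asdict(report), indent=2, ensure_ascii=False))
```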

A few concerns also arise, as this strategy might shift these platforms’ attention and responsibilities to the security community. Suddenly, it puts the burden of solving coded oppression on people who have been systematically excluded from designing technology and are often targeted by it. However, to engender structural change, we must acknowledge that algorithmic trauma is a consequence of software development processes and business strategies underpinned by colonial practices of extractivism and exclusion. These strategies inhibit the development of autonomous and collective solutions from and for the Global South.

 

Unionizing as a pathway to collective memory

In the digital realm, colonialism is perpetuated through proprietary software practices, data extractivism, corporate clouds, monopolized Internet and data storage infrastructure, and AI used for surveillance and control. All of this is implemented through content moderation, an automated process of deciding what and who can participate in the digital space. Social media companies claim on paper that their policies guarantee the preservation of universal human rights; however, algorithmic surveillance and trauma are often driven by business priorities or political pressures. As a result, moderation policies frequently target the viewpoints of marginalized communities, which are most at risk of over-enforcement. Notably, within the Latin American context, moderators’ lack of in-depth knowledge of the local languages and socio-political milieu makes social media moderation subject to overwhelmingly biased discretionary decisions. In addition, anticipating certain moderation practices, communities shift their self-representation, impacting how they will be remembered in the future.

“Anticipating certain moderation practices, communities shift their self-representation, impacting how they will be remembered in the future.”

Instagram feeds, Twitter threads, and Facebook posts become data archives and collective memories. These corporations are becoming the record keepers of each community, and of individual memory itself, while there is still a total lack of transparency regarding content moderation policies.

Using open source technology is a possible response, contributing to design efforts to archive and safeguard content and history, and helping preserve context, narrative, and ownership. Critically reflecting on the idea of open source, we must ensure data collection transparency for future accountability. We desperately need decentralized digital products that are developed in the Global South, collaboratively owned and maintained, and that exist outside the hegemonic platforms. This way, users gain greater autonomy to control what they want—and, more importantly, what they don’t want—to see and discover. They can also indicate what they would like to be remembered by, as well as how they prefer to have their data used in future research. Moreover, these design features minimize the risk of intentionally inflicting algorithmic trauma, since this proposed software design process would be trauma-informed, centering identity dissidents’ experiences.
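As a small illustration of what such user autonomy could look like in code, the sketch below models a user-held preference record that a community-maintained platform could honor: what to show, what to keep as memory, and whether data may be reused for research. All names and fields are assumptions for this example, not an existing protocol.

```python
# Hypothetical sketch of a user-held preference record for a decentralized,
# community-maintained platform. Field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    show_topics: set = field(default_factory=set)   # what the user wants to see
    hide_topics: set = field(default_factory=set)   # what they never want to see
    keep_as_memory: bool = True                     # preserve posts as a living archive
    allow_research_use: bool = False                # reuse for research is opt-in

def can_display(prefs: UserPreferences, topic: str) -> bool:
    # An explicit "don't show me this" always wins over everything else.
    if topic in prefs.hide_topics:
        return False
    # If the user listed topics, honor that list; otherwise show by default.
    return not prefs.show_topics or topic in prefs.show_topics

prefs = UserPreferences(show_topics={"community memory", "art"},
                        hide_topics={"hate speech"})
print(can_display(prefs, "art"))          # True
print(can_display(prefs, "hate speech"))  # False
print(prefs.allow_research_use)           # False until the user opts in
```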

“Designing algorithms and online experiences based on respect, and centering on human rights and care could foster a less traumatic environment for identity dissidents and, in doing so, improve the experience for all.”

The emphasis on the importance and input of labor in preventing the design of algorithmic trauma is essential to charting the way forward. There must be a shift in power in the technology industry to invite new processes that put the workers affected by these technologies at the forefront. There have been a few—mostly unsuccessful—attempts at unionizing by tech employees at Google, Amazon, Pinterest, and Salesforce in North America and Western Europe. Although the tech worker unions in Latin America haven’t yet raised these techno-societal issues, the gig economy workforce—vulnerable to unethical and exploitative tech policies and suffering from algorithmic trauma—has been organizing and protesting, demanding better work conditions be designed into these systems. Examining these proposals shows that better experiences are possible, and that the process directly informs the product’s impact. However, clear limitations remain: the industry’s will, the technical scale required for decentralized products, and financial sustainability—an impossibility given the monopoly of current tech companies.

The weight of responsibility could be directed to the workers building these harmful technologies. Once unionized, they could bring these uncomfortable conversations about harm and trauma directly into the development war room, to the decision-making managers. The software development processes, leadership teams, and strategists in the digital economy platforms must embrace the need for responsible algorithm building. Otherwise, it becomes an endless loop of coding inequalities perpetuating digital colonialism at tech team meetings worldwide. Designing algorithms and online experiences based on respect, and centering on human rights and care could foster a less traumatic environment for identity dissidents and, in doing so, improve the experience for all.

 

Isabella Barroso (she/her) is a journalist, researcher, technologist, and digital security instructor for activists. She researches the intersection of society, technology, and dissident communities. As the creator of the Guardians of the Resistance project, her current focus lies in the implications of social media platforms’ content moderation policies for the memory of dissident communities in Abya Yala, and the impact of this on their counter-archiving practices. Her ongoing research project has created Guardiãs da Resistência, a podcast in Portuguese that aims to connect and share the knowledge of those who weave and preserve dissident memories in the continent.



This text was produced as part of the Coding Resistance Fellowship.