Radical Signals: Looking for Signs of QAnon Radicalization on Twitter

A team of USC researchers proposes a better way to model conspiracy theories on Twitter.

In 68 CE, the Roman emperor Nero committed suicide. “Or did he?” asked the ancient conspiracy theorists who were convinced Nero faked his death and was secretly planning a return to power.

Conspiracy theories – explanations for events that involve secret plots by sinister and influential groups, often for political gain – are not a new human phenomenon. What is new, however, is the way they spread. In Nero’s day, word of mouth could only get a conspiracy theory so far. Today, social media platforms can give conspiracy theories a far greater reach.

Research scientist Luca Luceri and a team from the USC Information Sciences Institute (ISI), a research institute of the USC Viterbi School of Engineering, set out to better understand how conspiracy theories spread on social media, and what can be done to stop them. Specifically, they looked at how Twitter users are radicalized within the QAnon conspiracy theory.

Their resulting paper, “Identifying and Characterizing Behavioral Classes of Radicalization within the QAnon Conspiracy on Twitter,” has been accepted to the 2023 International AAAI Conference on Web and Social Media (ICWSM).

Online Tweets Can Become IRL Actions

“The motivation for looking at QAnon,” said Luceri, “was that it’s a recent, high-profile example of how dangerous a conspiracy theory that originates online and migrates into real life can be. Its supporters were among various online promoters of the attack on the Capitol.”

Why did the team choose to look at Twitter? Luceri explained, “That’s where the story migrates to mainstream media. The moment these stories get traction online in mainstream media is the moment when they might have an impact on real life.”

The team used a dataset of election-related tweets collected using Twitter’s streaming API service. They observed over 240 million tweets from an 11-week period before the 2020 U.S. election. This dataset included original tweets, replies, retweets, and quote retweets shared by over 10 million unique users.
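For readers curious what such a collection pipeline looks like in practice, here is a minimal sketch using the tweepy 3.x wrapper around Twitter’s v1.1 statuses/filter streaming endpoint, the interface available in 2020 and since retired. The credentials and tracking terms are placeholders, not those used in the study.

```python
import json
import tweepy  # tweepy 3.x interface; tweepy 4+ replaced StreamListener

# Placeholder tracking terms -- illustrative only, not the study's keywords.
ELECTION_TERMS = ["#Election2020", "Biden", "Trump", "#Vote2020"]

class ElectionListener(tweepy.StreamListener):
    """Appends every incoming status to a JSON-lines file."""

    def on_status(self, status):
        # Original tweets, replies, retweets, and quote retweets all
        # arrive through this callback; _json holds the raw payload.
        with open("election_tweets.jsonl", "a", encoding="utf-8") as f:
            f.write(json.dumps(status._json) + "\n")

    def on_error(self, status_code):
        # Returning False on HTTP 420 disconnects instead of retrying,
        # avoiding escalating rate-limit penalties.
        return status_code != 420

auth = tweepy.OAuthHandler("API_KEY", "API_SECRET")  # placeholder credentials
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
stream = tweepy.Stream(auth=auth, listener=ElectionListener())
stream.filter(track=ELECTION_TERMS)  # server-side keyword filter
```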

Moderating Q

In order to keep conspiracy theories like QAnon from spreading online, Twitter enacted moderation strategies leading up to the 2020 U.S. presidential election.

Luceri said, “The Twitter policies were meant to purge the Twittersphere of QAnon content. The interventions were quite effective in removing or suspending users that were generating a lot of original content. But we found evidence that QAnon content was still there, and in particular, less evident radicalized behaviors could escape this moderation intervention.”

So, the ISI team set out to find new methods for modeling radicalization on Twitter.

More Than Just Keywords

There is a body of prior research about QAnon radicalization that uses keywords – words that are frequently associated with the conspiracy theory – to identify its supporters.

The ISI team hypothesized that, while tweeted keywords were a strong signal of QAnon support, keywords alone were not enough; they had to look at both “content-based” and “network-based” signals.

The first two signals the team studied were user profiles and tweets, based on keywords and web sources.

Luceri explained, “With QAnon, we have a lot of information about not only the keywords supporters use, but also the web sources and media outlets that they tend to share in their tweets. Some users clearly disclosed and declared their support for QAnon in their profiles or their Twitter account descriptions. And sometimes they pointed to QAnon-related websites, which is another strong indicator.”
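As a rough illustration of how these content-based signals can be operationalized, here is a sketch that checks a tweet, a profile bio, and shared URLs against small keyword and domain lists. The lists and the helper function are hypothetical; the study’s actual lexicons were curated from prior QAnon research.

```python
import re
from urllib.parse import urlparse

# Illustrative lists only -- the real keyword and domain sets are larger.
QANON_KEYWORDS = {"qanon", "wwg1wga", "thegreatawakening", "qarmy"}
QANON_DOMAINS = {"qmap.pub", "qalerts.app"}

TOKEN_RE = re.compile(r"[#@]?\w+")

def content_signals(tweet_text: str, profile_bio: str, urls: list[str]) -> dict:
    """Flag the three content-based signals described above: keywords in
    tweets, keywords in the profile, and shared QAnon-related web sources."""
    tweet_tokens = {t.lstrip("#@").lower() for t in TOKEN_RE.findall(tweet_text)}
    bio_tokens = {t.lstrip("#@").lower() for t in TOKEN_RE.findall(profile_bio)}
    domains = {urlparse(u).netloc.removeprefix("www.") for u in urls}
    return {
        "keyword_in_tweet": bool(tweet_tokens & QANON_KEYWORDS),
        "keyword_in_profile": bool(bio_tokens & QANON_KEYWORDS),
        "qanon_source_shared": bool(domains & QANON_DOMAINS),
    }

print(content_signals(
    "The storm is coming #WWG1WGA",
    "Patriot. Digital soldier.",
    ["https://qmap.pub/read"],
))
# -> {'keyword_in_tweet': True, 'keyword_in_profile': False,
#     'qanon_source_shared': True}
```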

Reading Between the Retweets

The ISI team then went beyond content and looked at “community-based” signals in the dataset.

A psychology theory known as the “3N Model” has been used to describe the radicalization of people in the real world. The team wanted to see if it played out the same way online. Luceri explained, “The 3N model says that people in radicalized groups echo extreme ideologies while they isolate individuals with opposing ideas. So, we wanted to study and see if this could be verified on Twitter.”

By looking at tweets, retweets, follows, and mechanisms such as “I follow back,” the team found strong evidence of the 3N theory in the QAnon Twitter community.
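The paper’s network analysis is more involved, but a toy sketch conveys the idea: build a retweet graph and measure how much of a group’s amplification stays inside the group. The graph, the labels, and the internal-retweet ratio below are illustrative, not the study’s exact metrics.

```python
import networkx as nx

# Toy directed retweet graph: an edge u -> v means u retweeted v.
# Group labels are hypothetical; the study inferred them from content signals.
edges = [
    ("q1", "q2"), ("q2", "q1"), ("q2", "q3"), ("q3", "q1"),  # within QAnon
    ("q1", "m1"),                                            # cross-community
    ("m1", "m2"), ("m2", "m1"),                              # within mainstream
]
qanon = {"q1", "q2", "q3"}

G = nx.DiGraph(edges)

# Echo-chamber proxy: the fraction of the group's retweets that stay inside
# the group. Values near 1 suggest members mostly amplify each other.
internal = sum(1 for u, v in G.edges if u in qanon and v in qanon)
outgoing = sum(1 for u, v in G.edges if u in qanon)
print(f"QAnon internal-retweet ratio: {internal / outgoing:.2f}")  # 0.80
```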

Additionally, they looked at “lexical similarity” (think: slang, abbreviations, word pairings, punctuation and emojis) to quantify how lexically similar tweets within the QAnon community were.
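One common way to quantify lexical similarity, and a plausible stand-in for the paper’s measure, is cosine similarity over TF-IDF vectors of character n-grams, which preserve exactly the slang, punctuation, and emoji cues mentioned above. A minimal sketch:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented example tweets, not drawn from the dataset.
tweets = [
    "WWG1WGA!! The storm is here 🌩️",
    "Trust the plan. WWG1WGA 🌩️",
    "Polls open at 7am tomorrow, bring ID.",
]

# Character n-grams capture abbreviations, punctuation, and emojis
# that word-level tokenizers tend to discard.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vec.fit_transform(tweets)

sim = cosine_similarity(X)
print(sim.round(2))  # tweets 0 and 1 score far higher with each other than with 2
```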

What’s Next?

The researchers showed that radicalization processes should be modeled across multiple dimensions; one single dimension – simply looking at keywords, for example – cannot capture the complex facets of radicalization.

And because the methods used for this research were not specific to Twitter or QAnon, Luceri said, “we plan to expand this study to other scenarios and platforms, including niche platforms where conspiracy theories originate and are reinforced before moving to mainstream media. We are also considering other platforms, like TikTok, YouTube, and Mastodon.”

Luceri will present “Identifying and Characterizing Behavioral Classes of Radicalization within the QAnon Conspiracy on Twitter” at the 2023 International AAAI Conference on Web and Social Media (ICWSM), June 5 – 8, 2023, in Limassol, Cyprus.

This will be the 17th annual ICWSM, one of the premier conferences for computational social science and cutting-edge research related to online social media. The conference is held in association with the Association for the Advancement of Artificial Intelligence (AAAI).

Photo Credit: smartboy10/Getty Images