
A new study authored by Dr. Gabriel Weimann, a professor of communication at the University of Haifa, and web intelligence analyst Ari Ben-Am has revealed a new coded language that anti-Semitic groups are using on social media to fly under the radar of AI-driven detection algorithms.


This sophisticated language, used for both propaganda and recruitment, hides in plain sight and helps incite violent incidents like the Christchurch shooting in New Zealand and the January Capitol riots.


The study explores the emergence of this new language, along with its characteristics, transmission, and usage; its findings are intended to serve both law enforcement and private-sector hi-tech organizations in their efforts to combat hate speech online. Given the platform’s popularity, the study focused primarily on images and messages disseminated on Facebook.


Posts used for the analysis were identified qualitatively by scanning hundreds of relevant groups and pages, then narrowing these down to a smaller sample of several dozen pages (each with a minimum of 500 followers or likes) that served as an initial seed group.


Much of this coded language, the study found, relies on “dog-whistles”: coded messages communicated through words or phrases commonly understood by a particular group of people but not by others.


The method can be as simple as swapping one word for another, such as far-right users referring to Jews as “Skypes,” African-Americans as “Googles,” and Latinos as “Yahoos.”

Yet some tactics are more intricate, dabbling in numerology, where numbers stand for people or concepts. The number 88, for example, stands for “Heil Hitler,” since H is the eighth letter of the alphabet. Finally, the language also employs visual cues, often by manipulating popular memes drawn from pop culture. This method is by far the most sophisticated, as an algorithm’s ability to detect hate speech within an image is still limited.
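To see why simple keyword filters struggle with this, consider a minimal sketch of a glossary-based flagger. The lexicon entries below are taken from the article’s own examples (“Skypes,” “Googles,” “Yahoos,” 88); the function name, matching logic, and output format are illustrative assumptions, not the study’s actual method.

```python
import re

# Coded term -> plain-language meaning, taken from the article's examples.
GLOSSARY = {
    "skypes": "Jews (far-right substitution code)",
    "googles": "African-Americans (far-right substitution code)",
    "yahoos": "Latinos (far-right substitution code)",
    "88": "Heil Hitler (H is the eighth letter of the alphabet)",
}

def flag_coded_terms(post: str) -> list[tuple[str, str]]:
    """Return (term, meaning) pairs for every glossary hit in a post."""
    hits = []
    for term, meaning in GLOSSARY.items():
        # Whole-word match so "googles" doesn't fire inside unrelated words.
        if re.search(rf"\b{re.escape(term)}\b", post, re.IGNORECASE):
            hits.append((term, meaning))
    return hits

print(flag_coded_terms("Tell the Googles and the Skypes we said 88"))
```

Even this toy version exposes the core problem the study describes: a bare lexicon match cannot distinguish “88” in a street address from “88” used as a numeric code, so every hit still needs human review, and any term not yet in the glossary slips through entirely.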


While Weimann is currently compiling a comprehensive glossary chronicling instances of this coded language and their meanings, he hopes this study will encourage law enforcement and hi-tech companies to be more vigilant and to recognize that their algorithms are not the be-all and end-all of combating hate speech.


“It's clear that security, counter-terrorism, and government agencies, as well as social media platforms, are doing much to crack down on abuse,” Weimann said. “But we need to educate the operators of these companies that run social media platforms to report these violations and also teach their users how to spot them. A human eye is still much more savvy than a computer-generated algorithm.”


So successful is this new language of numbers, acronyms, and hidden images that it has driven the formation of alternative social media sites such as Gab and BitChute, and imageboards such as 4chan, 8chan, and Neinchan.


“While this is purely an academic study, it has real-world implications,” Weimann added. “Being Internet savvy and understanding what you're seeing online is important. We need to learn what they do, slow them down, and reduce their activity and efficiency. Words kill even if in a new language.”