
The Anti-Defamation League is sounding the alarm after its researchers found that social media companies are failing to remove white supremacist content, including conspiracy theories related to “white genocide, Jewish power and malicious grievances toward Jews and people of color.”

“This important research demonstrates that despite what they say are their best efforts, social media platforms continue to fail at detecting modern hate speech,” said ADL CEO Jonathan A. Greenblatt. “This is, in part, how the January 6th insurrection was organized in plain sight. There is no such thing as a polite white supremacist, and social media platforms must use their extensive resources to root out the extremist speech slipping past their current filters.”

The report made use of computational methods, including machine learning, to evaluate language used on the neo-Nazi forum Stormfront, by extremists in an alt-right Twitter network, and by general users on Reddit.

According to the ADL, Libby Hemphill and her research team at the University of Michigan School of Information found six key ways to distinguish hate speech from commonplace speech:

White supremacists frequently referenced racial and ethnic groups using plural noun forms such as “Jews” and “whites”; they appended “white” to otherwise unmarked terms, such as “power”; they used less profanity than is common on social media; their posts were identical on extremist and mainstream platforms, indicating they don’t modify their speech for general audiences; their complaints and messages stayed consistent from year to year; and they described Jews in racial rather than religious terms.
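To make the idea concrete, here is a minimal, hypothetical sketch of how markers like these could be turned into machine-readable features. It is not the researchers' actual method; the word lists and the function name are placeholders chosen for illustration only.

```python
import re

# Illustrative only: a toy feature extractor loosely based on the markers
# described above. The term lists below are assumed examples, not the
# lexicons used in the ADL/University of Michigan study.
PLURAL_GROUP_TERMS = {"jews", "whites"}                 # plural references to groups
WHITE_COMPOUNDS = {"white power", "white genocide"}     # "white" appended to unmarked terms
PROFANITY = {"damn", "hell"}                            # placeholder profanity list

def extract_features(post: str) -> dict:
    """Turn a post into simple counts a downstream classifier could use."""
    text = post.lower()
    tokens = re.findall(r"[a-z']+", text)
    return {
        "plural_group_terms": sum(t in PLURAL_GROUP_TERMS for t in tokens),
        "white_compounds": sum(phrase in text for phrase in WHITE_COMPOUNDS),
        "profanity": sum(t in PROFANITY for t in tokens),
        "length": len(tokens),
    }

print(extract_features("They keep talking about white power and the Jews."))
```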

The ADL said that the team’s findings “further support the need for platforms to remove violent extremist groups and content, including conspiracy theories like QAnon that fueled the January 6 insurrection. ADL experts recommend that platforms use the subtle but detectable differences in white supremacist speech to improve their automated identification methods.”

It called on social media platforms to enforce their own rules; use data from extremist sites to build automated detection models; look for specific linguistic markers; de-emphasize profanity in toxicity detection; and train platform moderators and algorithms to recognize that white supremacists’ conversations are dangerous and hateful.
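A rough sketch of two of those recommendations, building a detection model from posts gathered on extremist sites and down-weighting profanity so that impolite language alone does not drive the score, might look like the following. This is not the ADL's or the researchers' pipeline; the training texts, labels, and profanity list are hypothetical placeholders, and a real system would need far more data and careful evaluation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

PROFANITY = ["damn", "hell"]  # placeholder list; excluded so profanity carries no weight

# Hypothetical labeled examples: 1 = extremist-forum post, 0 = mainstream post
texts = [
    "the jews control the media and whites must wake up",
    "white power is the only answer to white genocide",
    "this damn traffic made me late again",
    "what the hell happened in the game last night",
]
labels = [1, 1, 0, 0]

model = make_pipeline(
    TfidfVectorizer(stop_words=PROFANITY),  # drop profanity tokens from the features
    LogisticRegression(),
)
model.fit(texts, labels)

# Probability that a new post resembles the extremist-forum training examples
print(model.predict_proba(["they talk about jews and white genocide constantly"]))
```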

“Extremists intentionally seed disinformation and build communities online to normalize their messages and incite violence,” said Hemphill, the report’s author. “With all their resources and these revealing findings, platforms should do better. With all their power and influence, platforms must do better.”