Far right groups are finding ways to circumvent hate content regulations on the popular social media platform TikTok by masking their hate speech behind catchy pop songs.

According to a new report, “Hatescape: An In-Depth Analysis of Extremism and Hate Speech on TikTok,” compiled using three months of research and 1,030 video samples, the site was found to be harbouring anti-Semitic and other hateful videos that were getting millions of views, many using the technique of masking hate behind a musical soundtrack.

The study found that many extremist TikTok creators use music and the site's video effects, along with its "duet/stitch" video creation features, to get around hate regulations.

They also “leverage” the platform’s functions to increase the visibility of their posts, including exploiting the “algorithmic promotion of certain hashtags” to increase views and engagement.

“Extremist content creators regularly try to make it onto other users’ For You page [a viewer’s main video feed] and go viral,” the study found.

Evasion tactics to get around bans include banned users returning to the platform with nearly identical usernames, making strategic use of privacy functions and comment restrictions, using alternative hashtag spellings, and exploiting a profile’s video grid layout to promote hatred.

The researchers noted that TikTok does remove hate and extremist content, but not on a consistent basis: only 18 percent of their sample of hateful videos had been removed by TikTok at the end of the data collection period.

The study concluded that “TikTok has a content moderation problem.”

“Extremist and even terrorist-related footage is easily discoverable on TikTok, despite contravening the platform’s terms of service. Content that praises and celebrates the actions of terrorists and extremists, or denies the existence of historically violent incidents such as genocides, is captured in our data, demonstrating how the platform operates as a new arena for violence-endorsing, hateful ideologies,” the study stated.

The report by the Institute for Strategic Dialogue sought to “start a conversation around how platforms like TikTok can improve their own practices to protect users from harm. Additionally, it underscores the clear need for independent oversight of such platforms, which currently leave users and the wider public open to significant risks to their health, security and rights.”