
CyberWell’s 2025 Annual Report shows online antisemitism is becoming increasingly sophisticated and event-driven, even as platform enforcement improves unevenly. While the average removal rate of reported antisemitic content across major platforms rose modestly from 50 percent in 2024 to 52.53 percent in 2025, individual platform outcomes varied sharply, leaving gaps that allow harmful content to spread widely.
CyberWell, a nonprofit Trusted Partner of Meta (Facebook, Instagram and Threads), TikTok and YouTube, monitored content across Meta, TikTok, X and YouTube, identifying three dominant narratives shaping online discourse in 2025: classic Jewish world domination conspiracies; scapegoating of Jews for global crises and violent events; and Conspiratorial Self-Victimization (CSV), a growing online narrative in which Jews are accused of staging violent attacks against themselves to gain sympathy or political advantage, or to shape public opinion. While CyberWell has identified Jewish world domination tropes consistently since 2022, scapegoating and CSV narratives entered the top three in prevalence for the first time in 2025 and appeared repeatedly after every significant global event CyberWell monitored throughout the year.
Removal rates differed dramatically by narrative and platform. Classic conspiratorial content alleging inflated Jewish power and manipulation was removed 59 percent of the time upon escalation, while scapegoating narratives, which blame Jews for global crises, world events and natural disasters, were removed at roughly 50 percent, often only when they overlapped with older, explicitly defined antisemitic tropes. This is likely because no platform, with the partial exception of TikTok, recognizes the scapegoating narrative as policy-violating. CSV content was removed just 37 percent of the time across platforms, with TikTok standing out as the only platform that consistently enforced against this narrative.
“Antisemitism is the attempt to erase the humanity of Jewish people through the spread of hate and false information. The current shifts in online antisemitism reflect new developments in conspiratorial online antisemitism meant to deny Jews victimhood and, ultimately, humanity,” said CyberWell Founder and CEO Tal-Or Cohen Montemayor. “Event-driven narratives are increasingly anchoring antisemitic content to real-world attacks, scapegoating Jewish communities for the wave of violence that has affected them worldwide in the last year. Leaders and allies in the fight against antisemitism must recognize this change in online Jew-hatred, explicitly name it as antisemitism, and call it out to ensure there are decisive interventions to prevent additional calls to violence sparked by these narratives.”
Platform-level enforcement shows extreme variation. TikTok achieved the highest removal rate, rising from 65.1 percent in 2024 to 88.81 percent in 2025, reflecting its comprehensive policies on denial of violent events and the mocking of their victims, while X was the most active platform for antisemitic content and had the lowest removal rate.
Meta’s removal rate increased from 49.5 percent in 2024 to 57.31 percent in 2025, crossing the threshold of removing a majority of reported content for the first time since CyberWell began monitoring in 2022. YouTube nearly doubled its enforcement, rising from 17.5 percent to 34.17 percent, a gain likely attributable to its work with CyberWell, which it credentialed as a priority flagger in the fourth quarter of 2025.
“YouTube’s improved removal rate demonstrates the impact of partnering with an organization like CyberWell that combines deep expertise in antisemitism with a rooted understanding of platform policies,” added Cohen Montemayor. “By integrating contextual knowledge with technology that supports real-time monitoring and flagging, platforms can enforce guidelines more effectively. This model shows why collaboration with specialized stakeholders is essential for addressing complex, evolving forms of online hate.”
By contrast, X experienced a sharp decline in enforcement, with removal rates falling from 54.2 percent in 2024 to 29.46 percent in 2025. Rather than removing content, X relied increasingly on visibility-limitation measures, often implemented only after antisemitic posts had already gained significant engagement, allowing the content to remain accessible and continue circulating widely despite the restrictions.
Holocaust denial and distortion content declined proportionally from 11 percent in 2024 to under 8 percent in 2025 and saw improved moderation. Removal rates for this category reached 93.75 percent on TikTok, 77.36 percent on Meta and 58.33 percent on YouTube. Analysts at CyberWell attribute the decline in prevalence to both stronger enforcement and a shift toward evasive contemporary antisemitic symbols, dog-whistles and emojis (“algo-speak”). X was an exception: its enforcement against Holocaust denial and distortion declined sharply, and even posts denying well-documented facts often remained available.
CyberWell stated that its findings “also highlight the growing role of AI-generated content and coded language in the spread of antisemitism. Automated moderation systems struggled to detect implicit antisemitic narratives embedded in memes, short-form videos, animations, emojis and symbolic imagery. As enforcement improves against explicit hate speech, antisemitic actors increasingly rely on these evasive formats to bypass detection and reach broader audiences. TikTok in particular, as a primarily visual platform, became the dominant site for AI-generated antisemitic posts.”
"Concerningly, the report identifies a reinforcing cycle in which real-world attacks trigger Conspiratorial Self-Victimization and scapegoating narratives (i.e. blaming Jews for tragedy and even antisemitic terror acts), which coexist with glorification or justification of further violence. The persistence of these event-driven narratives highlights the limits of current moderation frameworks and the need for all digital platforms to update policies to address emerging forms of online antisemitism. CyberWell warns that without clear policy definitions and updated enforcement frameworks addressing these contemporary narratives, platforms will continue to lag behind the evolving nature of online antisemitism," the report stated.
