Instagram (illustrative). iStock

Meta's weakened content moderation policies risk turning Instagram into a breeding ground for antisemitism, white supremacy and terrorist propaganda, according to a new report released today by the ADL (Anti-Defamation League) Center on Extremism, in partnership with ADL affiliate JLens.

Ahead of an upcoming Meta shareholder vote, the report, "How Meta's Content Moderation Changes Risk Turning Instagram into a Hub for Hate," reveals that Instagram removed just seven percent of hateful and extremist content reported by researchers, demonstrating a systemic failure to protect users on one of the world's most widely used social media platforms.

Between January and February 2026, researchers at the ADL Center on Extremism conducted systematic enforcement testing by reporting 253 pieces of violative content through Instagram's standard user reporting system. Of 150 reported accounts and 103 reported posts linked to white supremacist networks, designated Foreign Terrorist Organizations and vendors selling Nazi merchandise, Instagram removed only 11 accounts and 8 posts. In 20 cases, Instagram explicitly stated it lacked the bandwidth to review the reports.

"Instagram is developing into a hub for hate and antisemitism, and our research demonstrates this clearly," said Jonathan Greenblatt, ADL CEO and National Director. "Meta's moderation rollback has created a permissive environment where extremists thrive, bad actors turn Instagram’s own features into amplification tools for hate, and as a result, vulnerable communities suffer. As a company operating some of the largest communication platforms in human history, it is imperative that Meta change course to avoid the further normalization of antisemitism, hate and violent extremism globally."

Key findings include:

• 93 percent non-removal rate for reported extremist and hateful content

• 105 accounts affiliated with white supremacist Nick Fuentes' Groyper network were identified by researchers, with more than 1.4 million combined followers. These accounts have regularly posted antisemitic conspiracy theories, Holocaust denial and pro-Hitler content

• 340,000+ followers across accounts directly or indirectly linked to U.S.-designated Foreign Terrorist Organizations, including the Popular Front for the Liberation of Palestine (PFLP)

• 3.2 million+ views on content from a single extremist merchandise vendor selling apparel with Nazi symbols including Sonnenrads, Totenkopfs and SS bolts

The report documents how malicious actors have exploited Meta's user-reporting system. White supremacist networks have developed coordinated strategies to evade detection while maximizing reach; one collaborative post joking about Hitler running the United States received over 2.7 million views and 172,000 likes. Accounts supporting designated terrorist organizations disguise violative content with unrelated captions, while hate merchandise vendors partially obscure Nazi symbols in product photos to evade automated detection.

"Meta's decision to gut content moderation puts Instagram at risk of being a megaphone for the world's most dangerous antisemites and extremists," said Oren Segal, ADL Senior Vice President for Counter-Extremism and Intelligence. "When a platform used by 80 percent of American adults under 30 allows pro-Hitler content to rack up millions of views, Holocaust denial to spread unchecked and terrorist organizations to fundraise openly, we're not talking about a policy disagreement. We're talking about a public safety crisis. Mark Zuckerberg acknowledged Meta would 'catch less bad stuff' after the rollback. Our research proves he was right, and the consequences are deeply concerning."

The report emphasizes that Instagram's reach makes Meta's moderation failures particularly dangerous. According to Pew Research Center studies, the platform is used by 50 percent of teens aged 13-17, and the hate they encounter does not remain online. It bolsters the ranks of extremist movements, spreads terror messaging and finances offline violence.

Although white supremacist influencer Nick Fuentes has reportedly been personally banned from Instagram since 2021, he has circumvented the ban through a network of at least 105 affiliated accounts that boost his hateful rhetoric. These accounts post clips from Fuentes' livestream show that have accumulated millions of views. When moderation pressure increases, accounts temporarily deactivate, then resume once the risk is reduced.

The report also documents at least 23 active accounts spreading Islamic State and Al-Qaeda propaganda, and 33 accounts linked to the PFLP, including at least one that has fundraised for the designated terrorist organization through its Instagram bio, likely in violation of U.S. laws. Chapters of the PFLP-tied group Samidoun, sanctioned by both the U.S. and Canadian governments as a terrorist fundraising front, were also found operating on the platform.

ADL calls on Meta to take four concrete steps to restore safety:

• Reinstate proactive moderation measures against violative content

• Commit to reviewing all user reports at scale and audit automated systems

• Restore meaningful researcher data access through tools like CrowdTangle or specialized APIs

• Partner with ADL experts on hate and extremism to better identify violative content

"The ADL report raises important questions about whether Meta’s current approach to harmful content is adequately protecting users and the long-term interests of shareholders," said Ari Hoffnung, JLens Managing Director and ADL Senior Advisor on corporate advocacy. "For a company whose business model depends overwhelmingly on advertising revenue, safeguarding brand safety and user trust is essential. Investors should expect stronger oversight, clearer disclosure and measurable progress in how Meta is managing these risks."

This report builds on extensive shareholder advocacy by JLens at Meta. In May 2025, a shareholder proposal aimed at strengthening Meta’s accountability for harmful content across its platforms was the highest-supported human-rights-related shareholder proposal of the 2025 proxy season, receiving 46.8 percent support from independent shareholders. JLens has since resubmitted the proposal for consideration at Meta’s 2026 annual meeting.