The Anti-Defamation League (ADL) is urging the US Supreme Court to clarify that Section 230 of the Communications Decency Act does not automatically give social media platforms immunity from the consequences of not moderating and removing hate and extremist content that creates harm.
According to the ADL, the case of Gonzalez v. Google, brought by the family of an American woman killed in an ISIS attack on a café in Paris, raises questions about whether Section 230 protects platforms when their algorithms target users with problematic content, such as terrorist propaganda.
“ADL has long called for the interpretation of Section 230 to be updated, as the law, as currently applied by the lower courts, provides sweeping immunity for tech platforms even when platform product features and other conduct contribute to unlawful behavior – including violence,” ADL said.
The advocacy organization’s brief in the case asserts that Section 230 has been interpreted too broadly by the lower courts.
“For too long Section 230 has been misinterpreted by lower courts as granting near-blanket immunity to social media companies from the threat of liability,” said Lauren Krapf, ADL Technology Policy and Advocacy Counsel.
“Platforms are doing more than merely statically hosting extremist content: they are recommending it, amplifying it, and even auto-generating content. It should be possible to challenge social media companies’ actions in court when their own tools have helped facilitate offline harm.”
“At the same time, social media companies need to be able to rely on crucial provisions of Section 230 to moderate third-party content when it is harmful and violates their community guidelines,” she added.
The ADL called for a “whole-of-society response” to online hate and also for updating Section 230 “to ensure that tech companies can be held accountable for content that foments hate and bigotry when their own tools exacerbate it in ways that produce harm, while still protecting free speech.”