
Germany’s high court ruled that Facebook illegally removed allegedly hateful content without providing explanations to users, the Washington Free Beacon reported.

The ruling has the potential to compel the social media giant to alter how it moderates content.

On July 29, Germany’s Federal Court of Justice ruled that Facebook broke German law when it removed a German user’s posts criticizing migrants without prior warning or an explanation for the removal, leaving the user no way to appeal the deletions.

Facebook stated that it will closely examine the ruling in order "to ensure that we can continue to take effective action against hate speech in Germany."

Facebook had argued that it acted in a legally sound manner by removing the user’s posts, as the user was bound by the company’s terms of service.

However, the German court did not agree with Facebook’s argument. It stated that the terms of service "unreasonably disadvantage the users of the network contrary to the requirements of good faith."

If Facebook is no longer allowed to automatically ban users or remove content, the platform, which has long relied on that method of moderation, will be in uncharted territory.

Facebook told the court that having to tell every user why their content was being removed would prove “unworkable” due to the volume.

Facebook relies on AI algorithms as well as overseas contractors who manually review potentially problematic posts.

In the past, the site had been sharply criticized for allowing Holocaust denial content, which it finally banned in 2020.

Facebook and other major social media sites have faced mounting criticism in the last few years for not doing enough to remove anti-Semitic and other hateful content from their platforms.