
Europe Leads the Way in Mandating Moderation of Hate Speech

ModSquad

Over the past decade, social media has played a significant part in shaping modern history, from political campaigns to revolutions. And while social channels were originally designed to give users a platform for sharing, those same sites are often abused by trolls and bullies or used for data-stealing schemes.

Recently, we’ve seen the promise of regulations that will require host companies to recognize and remove dangerous content. Countries including the United Kingdom and Germany have put their feet down, forcing social publishers to police their own content. With requirements like these coming into force around the world, it’s never been more vital to understand the guidelines and be able to offer smart, safe community management.

In the UK, a parliamentary committee report accused major social media players of looking the other way on illegal content and called for large fines for companies unwilling to clean up their platforms. The report cited numerous examples of social media companies leaving up material supporting terrorist recruitment, sexual abuse, and hate speech, even after it had been flagged as inappropriate.

In Germany, a law passed in June imposes fines as large as $57 million on companies that fail to remove illegal or racist content within 24 hours of initial notification. Faced with an ongoing fight to quell the publication of illegal material, German lawmakers saw a pressing need to take matters into their own hands: a 2017 study found that, left to their own devices, Facebook and Twitter fell short of an existing national goal of removing at least 70 percent of the hate speech posted on their networks within 24 hours.

Many social media platforms lean on their communities (users and page admins) to handle or remove questionable content that appears on their pages. Critics suggest these companies aren’t doing all they can, noting that copyright-infringing content often gets quicker attention than hateful content. A recent German report measured how much flagged content each platform pulled within the expected 24-hour window: YouTube fared well, removing 90 percent of offending content, while Facebook removed just 39 percent and Twitter a mere one percent.

For its part, Facebook is adding workers in an attempt to tackle the problem head-on: 7,500 moderators (half of them added as part of this new push) are tasked with clearing the site of flagged posts as quickly as possible. It’s an important move, because these new regulations put the onus of removing hate speech and criminal behavior on the social media companies themselves, not on those posting to their networks, including brands and other companies that may fall victim to such posts on their own pages.

Critics counter that these regulatory moves leave free-speech decisions in the hands of social media companies, which could wind up banning anything even remotely questionable in order to avoid fines. It’s a valid point: for social sites, even those with deep pockets, a harsh financial penalty will likely push them toward much more stringent content regulation. Either way, the days of claiming an inability to control user-generated content are coming to an end. It’s in the best interest of these media companies — for that’s what they are — to provide a safe environment with stronger internal control over what gets published on their networks. Whether page owners trust the social media companies to properly regulate content is another matter. Still, the smart solution for any entity representing its brand on social media includes close scrutiny by informed content moderators.