by T.C. Sottek, The Verge
The systems that automatically enforce copyright laws on the internet may be expanding to block unfavorable speech. Reuters reports that Facebook, Google, and other companies are exploring automated removal of extremist content, and could be repurposing copyright takedown methods to identify and suppress it. It’s unclear where the lines have been drawn, but the systems are likely targeted at radical messages on social networks from enemies of European powers and the United States. Leaders in the US and Europe have increasingly decried radical extremism on the internet and have attempted to enlist internet companies in the fight. Many of those companies have been receptive to the idea. But their definitions of extremism, and how automated systems are being used, are still secret; neither Facebook nor Google would confirm these efforts to Reuters, which relied on two anonymous sources who are “familiar with the process.”
So far, major internet companies have relied on their users to flag illegal or restricted content. Earlier this year, Facebook said its users flag more than one million items for review every day. Twitter has been busy playing whack-a-mole against ISIS-related accounts at a furious pace, suspending 125,000 accounts as of February. And Google said it received over 75 million DMCA takedown requests in just one month in 2016. There’s also precedent for using automated systems to flag other kinds of illegal content; several major internet companies, including Microsoft, Twitter, Facebook, and Google, use automated systems to identify the transmission of child pornography.
But upgrading automated systems to suppress extremist content would be a step with potentially serious and unknown consequences: existing systems that take down content for suspected copyright and other violations deal with huge volumes of information and are routinely abused to suppress legal speech. Additionally, suppressing extremist speech involves far more grey area than clearly defined illegal content, like pirated media and child pornography.
The secret identification and automated blocking of extremist speech would raise new, serious questions about the cooperation of private corporations with censorious governmental interests. Governments and private individuals have already attempted in recent years, with varying degrees of success, to hold internet companies and service providers liable for the actions of third parties; the EU’s right to be forgotten rule now requires companies like Google to comply with individuals who want to scrub search results that point to their sensitive personal information. That rule has already been abused to try to suppress journalism.