As the E.U. continues to develop tactics to better combat terrorism, European authorities plan to propose strict rules about content policing by tech giants such as Google, Twitter and Facebook.
European Commission President Jean-Claude Juncker said Wednesday that the proposed rules would specify that big fines – up to 4 percent of annual global turnover – would be on the table if online sites don’t consistently take down extremist content within one hour of it being reported.
In his State of the Union address to the European Parliament, Juncker said that the one-hour timeframe represents “the critical window in which the greatest damage is done.”
He explained that extremist content – defined by the E.U. as “propaganda that prepares, incites or glorifies acts of terrorism” – would be flagged by national authorities in individual E.U. member states, which would issue removal orders. The regulation would also require companies to monitor for and prevent re-uploads of removed content, perhaps through the use of automation, artificial intelligence or other technology; and providers would have to publish annual transparency reports documenting their efforts.
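As a purely illustrative aside, one common technique for the kind of automated re-upload screening the proposal envisions is fingerprinting removed content and checking new uploads against that blocklist. The Python sketch below is a minimal illustration of the idea, not the mechanism the regulation prescribes; it assumes a simple SHA-256 blocklist, whereas production systems generally rely on perceptual hashing so that re-encoded or lightly edited copies still match.

import hashlib

# Illustrative sketch only: a hash blocklist that catches byte-identical
# re-uploads of content previously taken down under a removal order.
removed_hashes = set()

def record_removal(content: bytes) -> None:
    # Fingerprint content at takedown time and remember it.
    removed_hashes.add(hashlib.sha256(content).hexdigest())

def is_reupload(content: bytes) -> bool:
    # Screen a new upload against the blocklist of removed content.
    return hashlib.sha256(content).hexdigest() in removed_hashes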
Following the precedent set by the recently implemented GDPR privacy rules, the new regulation would apply to all hosting service providers offering services in the E.U., regardless of their size and even if they’re not based there.
The proposal still needs approval from E.U. lawmakers and member states.
“You wouldn’t get away with handing out fliers inciting terrorism on the streets of our cities – and it shouldn’t be possible to do it on the internet, either,” E.U. security commissioner Julian King said in a statement to The Hill.
The move expands on a voluntary code of conduct that Facebook, Microsoft, Twitter, YouTube and others have followed since 2016, aimed at stopping extremist groups from using the internet to radicalize, recruit and train followers, and to facilitate and direct terrorist activity. The E.U. said the effort has been relatively successful: “Currently 70 percent of illegal online hate speech is removed upon notification and in more than 80 percent of the cases the assessment is made within 24 hours,” it said in a notice about the proposed new rules. “The initiative has also been extended to other online platforms.”
The news comes as social-media giants take steps to limit foreign influence campaigns and planted content around upcoming elections and political events in the U.S. and beyond. Combating terrorist content is another facet of the same problem.
“We share the European Commission’s desire to react rapidly to terrorist content and keep violent extremism off our platforms,” Google said in a media statement. “We welcome the focus the Commission is bringing to this and we’ll continue to engage closely with them, member states and law enforcement on this crucial issue.”
For its part, Facebook said it was committed to the goal: “We’ve made significant strides finding and removing terrorist propaganda quickly and at scale, but we know we can do more.”