It’s more than four years since major tech platforms signed up to a voluntary pan-EU Code of Conduct on illegal hate speech removals. Yesterday the European Commission published its latest assessment of the non-legally binding agreement, lauding “overall positive” results: 90% of flagged content was assessed within 24 hours and 71% of the content deemed to be illegal hate speech was removed. The latter is up from just 28% in 2016.
However the report card finds platforms are still lacking in transparency. Nor are they providing users with adequate feedback on hate speech removals, in the Commission’s view.
Platforms responded and gave feedback to 67.1% of the notifications received, per the report card — up from 65.4% in the previous monitoring exercise. Only Facebook informs users systematically — with the Commission noting: “All the other platforms have to make improvements.”
In another criticism, the assessment of platforms’ performance in dealing with hate speech reports found inconsistencies in their evaluation processes: “separate and comparable” assessments of flagged content carried out over different time periods showed “divergences” in how it was handled.
Signatories to the EU online hate speech code are: Dailymotion, Facebook, Google+, Instagram, Jeuxvideo.com, Microsoft, Snapchat, Twitter and YouTube.
This is now the fifth biannual evaluation of the code. It may not yet be the final assessment, but EU lawmakers’ eyes are now firmly fixed on a wider legislative process, with commissioners busy consulting on and drafting a package of measures to update the laws wrapping digital services.
A draft of this Digital Services Act is slated to land by the end of the year, with commissioners signalling they will update the rules around online liability and seek to define platform responsibilities vis-a-vis content.
Unsurprisingly, then, the hate speech code is now being talked about as feeding that wider legislative process — while the self-regulatory effort looks to be reaching the end of the road.
The code’s signatories are also clearly no longer a comprehensive representation of the swathe of platforms in play these days. There’s no WhatsApp, for example, nor TikTok (which did just sign up to a separate EU Code of Practice targeted at disinformation). But that hardly matters if legal limits on illegal content online are being drafted — and likely to apply across the board.
Commenting in a statement, Věra Jourová, Commission VP for values and transparency, said: “The Code of conduct remains a success story when it comes to countering illegal hate speech online. It offered urgent improvements while fully respecting fundamental rights. It created valuable partnerships between civil society organisations, national authorities and the IT platforms. Now the time is ripe to ensure that all platforms have the same obligations across the entire Single Market and clarify in legislation the platforms’ responsibilities to make users safer online. What is illegal offline remains illegal online.”
In another supporting statement, Didier Reynders, commissioner for Justice, added: “The forthcoming Digital Services Act will make a difference. It will create a European framework for digital services, and complement existing EU actions to curb illegal hate speech online. The Commission will also look into taking binding transparency measures for platforms to clarify how they deal with illegal hate speech on their platforms.”
Earlier this month, at a briefing discussing Commission efforts to tackle online disinformation, Jourová suggested lawmakers are ready to set down some hard legal limits online where illegal content is concerned, telling journalists: “In the Digital Services Act you will see the regulatory action very probably against illegal content — because what’s illegal offline must be clearly illegal online and the platforms have to proactively work in this direction.” Disinformation would not likely get the same treatment, she suggested.
The Commission has now further signalled it will consider ways to prompt all platforms that deal with illegal hate speech to set up “effective notice-and-action systems”.
In addition, it says it will continue, this year and next, to work on facilitating dialogue between platforms and the civil society organisations that are focused on tackling illegal hate speech, saying that it especially wants to foster “engagement with content moderation teams, and mutual understanding on local legal specificities of hate speech”.
In its own report last year assessing the code of conduct, the Commission concluded that it had contributed to achieving “quick progress”, particularly on the “swift review and removal of hate speech content”.
It also suggested the effort had “increased trust and cooperation between IT Companies, civil society organisations and Member States authorities in the form of a structured process of mutual learning and exchange of knowledge” — noting that platforms reported “a considerable extension of their network of ‘trusted flaggers’ in Europe since 2016.”
“Transparency and feedback are also important to ensure that users can appeal a decision taken regarding content they posted as well as being a safeguard to protect their right to free speech,” the Commission report also notes, specifying that Facebook reported having received 1.1 million appeals related to content actioned for hate speech between January 2019 and March 2019, and that 130,000 pieces of content were restored “after a reassessment”.
On volumes of hate speech, the Commission suggested that notices concerning hate speech make up roughly 17-30% of total flagged content, noting for example that Facebook reported removing 3.3 million pieces of content for violating its hate speech policies in the last quarter of 2018 and 4 million in the first quarter of 2019.
“The ecosystems of hate speech online and magnitude of the phenomenon in Europe remains an area where more research and data are needed,” the report added.