Monday, July 27, 2020

Why Congress should look at Twitter and Facebook

Twitter recently announced it would take action against accounts posting information related to the QAnon conspiracy theory, whose adherents follow the “breadcrumbs” left by a mysterious figure known as Q in cryptic messages about the Trump administration posted on anonymous online message boards. In response to these accounts’ spread of misinformation and harassment, Twitter will permanently ban more than 7,000 of them and remove others from its search and trending pages.

It’s a significant move not simply because of its scope, but also because many of these accounts are run by people based in the US. For the past few years, we’ve seen tech platforms willingly take action against “coordinated inauthentic behavior” by foreign actors. That’s an industry-coined term of art contested by researchers like me. Analyzing the evolution of efforts to suspend and remove accounts linked to disinformation campaigns is not easy, since these companies have only recently begun publishing transparency reports. While Twitter releases full archives of the account sets it has removed, Facebook posts blog updates about takedowns without providing significant access to the data for auditing. Nevertheless, Facebook and YouTube have removed US accounts for hate speech: Facebook recently took down a legacy network of fake accounts linked to Roger Stone and the US hate group the Proud Boys, and in late June YouTube banned several longtime US content creators.

Removing popular individuals—and not just foreign influencers—is a significant step in the battle against disinformation, because influencers depend on their name as a brand. Without access to that name as a keyword, they struggle to rebuild their audiences on other platforms. This is precisely why deplatforming works to curb misinformation and harassment.

Removing large-scale networks of accounts has a different, but no less significant, effect. Changes to the information ecosystem reduce the amplification power of these groups, and removing the networked faction of QAnon accounts ahead of the election is notable because that faction is a significant node in the new MAGA coalition. Without this network of superspreaders on Twitter, it will be harder to coordinate the manipulation of search engines and trending algorithms.

But even if these actions succeed in reducing the spread of conspiracy theories, they reveal the twin problems facing online platforms: some speech is damaging to society, and the design of social-media systems can compound the harms.

Speech monopolies

All these interventions come as Amazon, Apple, Google, and Facebook have been asked to testify before the House Judiciary Antitrust Subcommittee. The hearing—now delayed until Wednesday—is part of a series exploring “Online Platforms and Market Power” and will call Jeff Bezos of Amazon, Tim Cook of Apple, Sundar Pichai of Google, and Mark Zuckerberg of Facebook.

Republicans have sought to invite others, including Twitter CEO Jack Dorsey—but also an outlier, John Matze, founder of the right-wing app Parler. Parler has built its brand on claims that Twitter censors conservatives, and it recently went on a sprint to recruit Republican politicians. In July, Matze was a guest on a podcast that routinely features white nationalist and misogynist content and that was banned from YouTube in 2018 for hate speech. During the interview, Matze expressed pride in providing a platform for people who have been removed from other services, such as Laura Loomer, Milo Yiannopoulos, and Jacob Wohl. On Parler, their content is served alongside posts from Republican politicians such as Rand Paul, Ted Cruz, and Matt Gaetz.

Gab, another minor app, gained limited popularity by promoting itself as a safe haven for “free speech” following the white supremacist violence in Charlottesville, Virginia. Research my colleagues and I conducted on its development illustrates the serious limitations of minor apps that provide alternative infrastructure for communities trafficking in hate speech.

Gab has become synonymous with white supremacists, who use the platform to organize violent events. In many ways, Parler is following a similar playbook by promoting itself as a liberation technology that values the First Amendment above all else. This marketing push rides the wave of attention to the ongoing crisis tech companies face as they struggle to moderate hate speech, disinformation, and harassment. But what is the point of becoming a safe space for such vitriolic influencers? Parler is using the current moment to recruit new customers, essentially offering the same old service but with the susceptibility to manipulation deliberately built in.

If the congressional hearings explore conspiracy theories and hate speech, they will be an important marker—not least because medical misinformation and political disinformation are becoming enmeshed in new and dangerous ways, at exactly the moment when those of us in soft quarantines have come to rely on social media more than ever. Even setting hate speech aside, scams, hoaxes, and health misinformation are propagating at alarming rates across our media ecosystem.

A race to the bottom

Platforms have too much power to sway public opinion and influence behavior—and that directly affects our capacity to engage in deliberative debate and to know what is true.

Twitter may no longer be “the free-speech wing of the free-speech party,” but the alternatives seem to offer little other than a race to the bottom, especially if they refuse to acknowledge that the ideology of free speech as an unmitigated good is what got us into this terrible position in the first place. 

Twitter’s actions on QAnon accounts seek to redress this imbalance, but right now the design of our media ecosystem advantages disinformers and those who amplify them.

In politics, this is anti-democratic; with medical misinformation, it is physically dangerous. If Congress chooses to look in the right places, it may discover the true cost of social media—and the fact that we cannot build a healthy democracy on top of an ailing communication system.



from MIT Technology Review https://ift.tt/2D7GQFF
