Ever since the outbreak of COVID-19, social media platforms have taken on a more active role in addressing information disorder (misinformation, disinformation and malinformation). From flagging posts to actively pointing users to authoritative sources of information, and even accelerating the verification of accounts belonging to public health authorities, they have adopted a range of measures in the fight against the pandemic. However, the flip side of this new-found interventionism is that calls for these platforms to take a more active role in addressing harmful content have only grown more strident.
Acting against Trump
One of the most polarising figures on social media is US President Donald Trump. For years, many have asked the question: 'What will it take for Twitter to act on his tweets?' In March, the company deleted tweets by Brazil's Jair Bolsonaro and Venezuela's Nicolas Maduro for COVID-19-related misinformation. After that, it became a question of when, not if, it would take action against content posted by Mr. Trump. It took another step in this direction by blocking the hashtags #InjectDisinfectant and #InjectingDisinfectant after Mr. Trump suggested that injecting disinfectant could be a cure for COVID-19, but stopped short of acting on his original posts.
It finally took the plunge on May 28, when it flagged tweets in order to enforce its 'civic integrity policy', which aims to prevent election interference and the misleading of voters. And the proverbial lightning struck twice when, a day later, it placed a 'Public Interest Notice' on another tweet for violating its policy against the glorification of violence. The content of this tweet was replicated by the official handle of the White House, and it, too, was flagged.
Facebook, as Evelyn Douek, a lecturer at Harvard Law School who studies the global regulation of online speech, points out, has a policy on voter suppression and intimidation that it could have applied to similar content posted on its platform. It had previously taken action against content posted by Jair Bolsonaro, again for COVID-19-related misinformation. Instead, it chose not to act, stating, not for the first time, that private companies should not be 'arbiters of truth'.
US government response
The reaction from the American executive was swift: an Executive Order was signed. This order inserts itself into a long-standing debate on Section 230 of the Communications Decency Act, which provides protection for 'Good Samaritan' blocking and screening of offensive material. The clause is what makes moderation possible: before it, as technology analyst Ben Thompson points out, an attempt to moderate any user-generated content could make an electronic service liable for all of it. In fact, Section 230 has been described by Jeff Kosseff, cybersecurity law professor at the United States Naval Academy, as 'the 26 words that created the internet'.
In parallel, Yoel Roth, head of site integrity at Twitter, was attacked by right-leaning media outlets and social media accounts for having previously posted anti-conservative views on Twitter. Accusations of pro-liberal bias on the internet are a long-running theme. As far back as 2006, Conservapedia, a conservative antidote to Wikipedia's alleged liberal bias, went online. And as recently as last week, there were reports of a panel in the United States that would look into allegations that social media platforms are biased against conservatives.
Indian experience
This is unsurprising, given how firmly tribalism is part of the social media package. We've seen this in India too. A photograph of Twitter CEO Jack Dorsey holding up a poster that said 'Smash Brahmanical Patriarchy' enraged many in November 2018. In February 2019, the IT Standing Committee 'summoned' Mr. Dorsey to appear before it, after concerns that the platform was not safeguarding 'citizens' rights'. This action followed a #ProtestAgainstTwitter campaign accusing it of an 'anti-right wing attitude'.
Later in the year, it faced ire from the other end of the ideological spectrum when lawyer Sanjay Hegde's Twitter account was suspended, leading many users to temporarily adopt Mastodon. And we now appear to have come full circle. In March, Wikipedia was accused of an 'anti-Hindu' bias after a tussle between editors of a page on the 2020 Delhi riots. And in April, Twitter suspended media personality Rangoli Chandel's account for a post containing 'dangerous speech'. Claims of unjustified account suspensions or shadow bans continue.
After the Executive Order, some Indian news channels even held TV debates on whether it should be replicated in India to check what is perceived to be selective or biased censorship by social media platforms. And at the time of writing, there appears to be some limited activity on Twitter using #TwitterTimeUp and #Twitterpickssides in the aftermath of that debate.
[Graphic: a diffusion chart of tweets using the hashtag #TwitterTimeUp]
Blind imitation won’t work
But before resorting to isomorphic mimicry, it is important to understand what the Executive Order actually proposes. A reading suggests that it seeks to narrow the definition of the 'good faith' under which a platform can carry out 'Good Samaritan' blocking. Experts in the field, such as Kate Klonick, were quoted in the media as saying that the order was not enforceable, even referring to it as 'political theatre'. Daphne Keller published an annotated version of the order in which she classified various sections as 'atmospherics', 'legally dubious', and points on which 'reasonable minds can differ'.
The current trajectory in India points in the opposite direction. A recent PIL in the Supreme Court, filed by a BJP member, sought to make it mandatory to link social media accounts with identification. While the petition itself was disposed of, the petitioner was directed to be impleaded in the ongoing WhatsApp traceability case. The draft Personal Data Protection law proposes 'voluntary' verification for social media intermediaries.
And the draft version of the Intermediary Guidelines under the Indian IT Act, published in November 2018, seeks to put more responsibility on platforms by changing the 'safe harbour' protection they currently enjoy. Assuming the guidelines do not change substantially in their final form, platforms will resort to over-regulation of speech in order to avoid liability. The result of such a regulatory regime will likely be more, not less, censorship.
(Prateek Waghre is a research analyst at The Takshashila Institution)
Disclaimer: The views expressed above are the author's own. They do not necessarily reflect the views of DH.