Combating fake news: MeitY should allow legal flexibility tailored to each social media platform's function over a one-size-fits-all regulatory approach

by Prannv Dhawan and John Simte Jun 25, 2019

The Indian social fabric is reeling under a monumental onslaught of hatred and misinformation, even as our political discourse has become increasingly toxic and vile.

The recent incident of mob lynching represents yet another manifestation of the extreme abuse of communication technology and social media. It is undeniable that the social media universe has become a breeding ground for Islamophobia, which has fuelled communal incidents in the country. The direct and disastrous consequences of such content circulating on social media in India can neither be downplayed nor dismissed arbitrarily. In our diverse society, the cohesion we enjoy must not be taken for granted in the face of unregulated technologies that people are not trained to use.

In this context, even as the need for robust and meaningful regulation of fake news has been underlined in legal and policy discourse, critics contend that such regulation would have undesirable chilling effects on free speech.

Even the former minister for Telecom and Information & Broadcasting, Manish Tewari, has strongly argued for regulating technological mediums by making social media platforms liable when their networks become catalysts of mass social and economic disruption. Even as concerns about the implications for democratic discourse, the social fabric and national security are highly important, the interests of India’s over 34 crore social media users must also be recognised. This is all the more important because the editorial checks by social media platforms to remove flagrantly objectionable content have proven extremely inadequate.

That said, an often-used argument against the imposition of any regulatory methods, especially in free-speech-protective jurisdictions such as the United States, is the metaphor of the “marketplace of ideas”. It is worth noting the inherent assumption in this argument when applied in this context: that the dissemination of information takes place on a level playing field and that power/knowledge plays no role in this ‘market’. However, as we have learned from the ‘troll’ army cultivated by the right-wing ecosystem with its vast resources, what counts as true or false information, or, as Foucault teaches us, the ‘regimes of truth’ and ‘general politics of truth’ in a society, depends entirely on who wields overwhelming power and comes to monopolise this marketplace. Therefore, any algorithmic push by platforms that host content to counter fake news or misinformation is bound to fail terribly because of the sheer complexity of the language and speech used for communication, and will only continue to serve as a vindication of the myth of the technological fix.

The question of legal regulation leads to the challenge of defining this ambiguous phenomenon. Normatively, what exactly constitutes ‘hate’ or ‘toxic or objectionable content’ remains subjective. This subjectivity creates problems for any regulator and also hinders the use of automated technologies for such determinations.

Even the determination of falsity involves separating opinion, fact and truth. This dilemma was succinctly stated by Michael Herz and Peter Molnar in their seminal work: “One person’s ‘trolling’, after all, is another person’s ‘good-faith discussion’, and God help the regulator tasked with drawing a line between them.” It is at this altar of subjectivity and context that any attempt by governments to lay down objective laws to regulate ‘objectionable content’ falls flat. This is further complicated by differences in laws, ideology, culture and society across the borderless landscape of the internet.

In February 2019, the MeitY released the Draft Intermediary Guidelines, 2018 under Section 79 of the Information Technology Act, in what seemed to be an immediate state response to a series of public lynchings by vigilante groups that killed Muslim individuals suspected of carrying illegal cattle or beef. These guidelines emerged after what were termed ‘secret consultations’ that the MeitY conducted with internet companies, and soon after the MeitY issued a notice to WhatsApp for ‘abetting fake news’ on its end-to-end encrypted messaging platform.

However, this move by the MeitY has instead raised complex questions about curbs on the constitutionally protected right to freedom of speech and expression through censorship, and amounts to regulatory overkill. It will also have a tremendous impact on the functionality and product design of leading messaging platforms. The challenge of intermediary liability and over-regulation is also significant: it is difficult to safeguard harmless discussions, or even purely satirical texts, which could be subjected to censure. Moreover, an intermediary platform’s regulation of communication, and its effectiveness, would always be subject to the nature of government oversight. In this case, intermediary liability and the consequent state control over determining how effectively social media platforms regulate fake news can result in indirect regulation of online content at the whims of the ruling government.

The current legal framework, and the path the MeitY seems to chart as the designated nodal authority to decide the future of an anti-fake news law in India, is fundamentally geared towards the creation of a state-monitored surveillance and security regime, similar to what we have seen in other jurisdictions like Singapore and Malaysia. Instead of injecting transparency into the systems and processes that intermediaries adopt for regulating and moderating the content hosted on their platforms, or fixing greater, proportionate accountability on the ‘intermediaries’, it allows for significant intrusions into privacy and the undermining of free speech.

The German approach of imposing heavy penalties in addition to criminal culpability is also not desirable, as it would lead to over-cautious censorship and unintended harm to users, and make technology startups unsustainable.

As noted earlier, the regulatory reactions of various governments across the world to combating fake news have been deeply invasive of individuals’ informational privacy and autonomy because of their emphasis on ‘traceability’. Any course of action adopted in the future should therefore involve exploring alternatives. These can include a multilateral approach aimed at creating a globally harmonised framework for regulating online hate speech that incites violence, or technological measures such as improving the Terms of Service (ToS) agreements of ISPs and end-user screening software.

In this context, New Zealand Prime Minister Jacinda Ardern’s response to the Christchurch massacre can be a harbinger for developing a global response to this polycentric challenge. Constitutionally conscious citizens in India, concerned with the rising incidence of majoritarian hate crime as well as the sledgehammer method adopted by the State, must also collectively mobilise support for the ‘Christchurch Call’, a non-binding multilateral agreement that calls on social media giants to meaningfully clamp down on violent and toxic content.

Community consciousness and the encouragement of anti-fake-news initiatives are also crucial to ensure that a critical mass of public discourse rejects malicious attempts at hatemongering and misinformation.

Initiatives like AltNews must gain more public support, and even State support, so that they can fulfil the necessary public duty of combating fake news. Technological platforms and social media companies, too, must step up their efforts at awareness and sensitisation.

Yet, it is also important that we continue to re-imagine our existing regulatory architecture and foundationally reconstruct existing methods and approaches in order to meaningfully address the problem of fake news. This re-imagination must shift from the current ‘one-size-fits-all’ model to a ‘legal flexibility’ that can be customised to the function of each intermediary. A transformational policy shift in this direction will protect privacy and informational rights and strongly preserve ‘safe harbours’, while at the same time making platforms liable for mass social and economic disruption, especially in the deeply divided and sectarian times we live in.

The authors are students at the National Law School of India University, Bengaluru

The article was published on www.firstpost.com on 25 June 2019