Dangerous Speech in Real Time: Social Media, Policing, and Communal Violence

The article examines how a set of key actors—the police, civil society, and social media platforms—responded to a series of violent incidents in Pune in 2014 that resulted in the death of Mohsin Sheikh. The hate speech by Hindu Rashtra Sena leaders, the murder of Sheikh, violence and arson against Muslims, the circulation of morphed images, and the actions of the police and civil society groups were all part of an ecosystem of events that unfolded at the time. What is new here, compared to earlier incidents of communal violence, is the technology being used: social media accessed through internet-enabled mobile phones. This in turn raises a number of legal and technological questions that need to be investigated further.

The Bombay High Court’s reasoning in an order granting bail to three of the accused in the murder of Mohsin Sheikh, a 28-year-old IT professional who was killed during communal violence in Pune in 2014,[i] has raised a number of legal and ethical questions. Justice Mridula Bhatkar’s decision primarily revolves around four factors: the accused did not have any previous convictions; many of the co-accused in the case had already been granted bail; the victim was easily recognisable as Muslim; and, related to this, the accused had been present at a meeting where Dhananjay Desai, a co-accused in the case and a maverick leader of an outfit called the Hindu Rashtra Sena (HRS), had just made a provocative speech.

In 2014, members of the Bharatiya Janata Party, Shiv Sena and the HRS went on a rampage in Pune and other parts of Maharashtra using the pretext of morphed images, including those of Shivaji and the late Shiv Sena leader Bal Thackeray, on social media (Katakam 2014). Narratives trace the violent incidents back to a Facebook profile called Dharamveer Shree Sambhaji Maharaj, which allegedly carried derogatory pictures of Shivaji, Thackeray, Ganesha and Sambhaji Maharaj, and which led to a violent mob damaging 12 buses in Chinchwad (Ansari 2014). News of this violence spread quickly through Whatsapp, leading to stone pelting and false alarms. A photograph of Nikhil Tikone, a young man from Kasba Peth, made the rounds on Whatsapp portrayed as that of one “Nihal Khan”, the person supposed to have made the allegedly derogatory Facebook post (Ansari 2014).

The Maharashtra state government deployed a large number of police personnel, including members of the State Reserve Police Force (SRPF) to control the violence. They also got the Facebook post removed through the Indian Computer Emergency Response Team (CERT-In), the national nodal agency responsible for cyber security authorised under section 70-B of the Information Technology Act (Ansari 2014).

Mohsin Sheikh was killed on the night of 2 June 2014 in Hadapsar, two days after demonstrations against the circulation of the morphed images began. More than 12 people were injured in the violence (Ansari 2014). In the span of a few days, more than 250 government buses were burnt, and Muslim businesses, madrasas and mosques were attacked in Hadapsar. The Hadapsar police arrested 21 persons in this case, of whom 11 were granted bail between April and June 2016. Dhananjay Desai was refused bail, and remains behind bars in Yeravada prison.

Justice Bhatkar’s logic of how the accused were provoked to murder Sheikh is a perverse inversion of the relationship between speech and violent action that we are familiar with. Much of the debate regarding speech and violent action has centred on whether arresting or criminalising a person for speech that allegedly results in violence is justified. The court’s deployment of the markers of Sheikh’s Muslim identity—his pastel green shirt and beard—as a means of arguing that the accused did not have any personal enmity with the victim is extremely disturbing. Surely a crime committed because of someone’s religious identity cannot be viewed as less serious than one based on purely personal reasons. If such an incident had happened in other jurisdictions, such as the United States (US), it would have been categorised as a hate crime, where injury or death caused because of a person’s actual or perceived colour, race, religion or national origin attracts a maximum sentence of life imprisonment. This bail order brings to the fore the vexed question of the relationship between the speech act and acts of violence, especially in cases of communally charged speech, which I refer to as “dangerous speech”. The legal scholar Susan Benesch and her colleagues have been grappling with the idea of dangerous speech, a subset of public speech that has the capacity to increase the risk of group violence. Benesch’s concern comes from examples such as the genocide in Rwanda, where mass violence was preceded by speech that vilified and dehumanised the Tutsi community.

Benesch and other scholars, such as Jonathan Maynard, do not think of “dangerous speech” as a new crime that they are defining, but as a framework for understanding how such speech can become an important catalyst during mass violence. They strongly advocate non-legal measures, such as counter speech, to address dangerous speech, and highlight the dangers of criminalising it. Criminal laws, while framed as tackling dangerous speech, are often used to muzzle political and other dissent, and can backfire as a strategy, often giving more publicity to the speaker (Maynard and Benesch 2016). I find this framework a useful point of reference for discussing the events in Pune in 2014, and for describing the content that circulated on social media during, and as a precursor to, communal violence—content widely believed to have served as a catalyst for communal tensions and violence in the incidents I describe.

As per the dangerous speech framework, the factors that may increase the risk of speech leading to atrocities include the influence of the speaker; whether the audience has grievances and fears that the speaker can cultivate; a speech act that is clearly understood as a call to violence; a social or historical context that is propitious for violence; and a means of dissemination that is influential in itself (Benesch 2012). It is this last factor, the means of dissemination, specifically the circulation of material on social media in the lead up to, and during communal violence in India that I focus on in this paper.

Social Media and Communal and Ethnic Violence

The 2014 violence in Pune is one in a series of incidents of communal and ethnic violence in India in the post-2012 period, where content circulating through internet-enabled mobile phones and on social media has reconfigured the way in which the law, police, and civil society have grappled with this issue. The term social media has been defined as referring to both (a) sites and services that emerged globally in the 2000s, including social networking sites, video sharing and blogging platforms that allow users to post and share their own content, and (b) the cultural mindset that emerged in the mid-2000s as part of the technical phenomenon called Web 2.0 (Boyd 2014).

I have used the term broadly in this paper to include the SMSes and MMSes sent during the exodus of persons from the North East in 2012, and instant messaging platforms such as Whatsapp, referred to as Over the Top (OTT) services.

In August 2012, the circulation of threatening SMSes and MMSes in Bengaluru, Pune, Chennai and other cities with sizeable populations of persons from the North East was one of the first incidents where the configuration involving dangerous speech, social media and public disorder became visible. Thousands of persons from the North East, many of whom were workers in the service industry and students, fled these cities fearing for their physical safety.

Circulating at the time were images related to the violence against Muslims in Assam, where clashes with Bodos in the Kokrajhar and neighbouring districts in July 2012 had left scores dead. These images were accompanied by messages threatening retaliatory violence against persons from the North East living in cities like Bengaluru.[ii]

The reportage by newspapers in the North East about the threats in Bengaluru further fuelled the panic, with many parents asking their children studying in cities like Bengaluru to return home. Such was the scale of the panic that the Indian Railways arranged for nine special trains to Guwahati as thousands of people gathered at the Bengaluru railway station, desperate to leave.

The central government responded to the serious situation by blocking bulk SMSes, and a number of websites, Twitter accounts, blog posts and blogs related to communal issues and rioting (Prakash 2012). Free speech activists have pointed out that despite its best intentions, the government’s actions were marred by procedural irregularities and overreach.[iii]

In Uttar Pradesh in 2013, the circulation of a video portraying the brutal lynching of two men by a mob, now believed to be footage of a lynching in Sialkot, Pakistan, played an important role in the events leading up to the Muzaffarnagar riots of August–September 2013. More than 60 people were killed and more than 50,000 displaced in the riots. The accompanying text and audio messages falsely claimed that the video showed a group of Muslims lynching two Hindus in Kawal, where two Hindu men and a Muslim man had been killed in a confrontation over an alleged incident of eve teasing.

The circulation of this video played an important role in the mobilising of the Jat community, as well as in creating an atmosphere of distrust among people in the region. According to newspaper reports, the Vishnu Sahai Commission, instituted by the Uttar Pradesh government to investigate the riots, states in its report that the circulation of the Kawal video was a significant factor in the events leading up to the riots. The report indicts Sangeet Som, the Bharatiya Janata Party’s Member of Legislative Assembly from Sardhana in Meerut district, who has been charged under section 153A of the IPC for sharing the Kawal video on his Facebook page (Raghuvanshi 2015).

Policing Social Media in Pune

Intense media scrutiny was directed at the actions of the government and police during the communal violence in Pune because the violence had occurred just after the BJP’s sweeping victory in the 2014 general elections. Many people feared that the political changes at the centre would lead to a rise in anti-Muslim speech and heighten communal tensions.[iv]

After the outbreak of violence on 31 May 2014 in Pune, members of civil society, worried about the fallout of the riots and the continued communal tensions in the area, began to address the circulation of morphed and incendiary images that they felt had vitiated the atmosphere in the city. A group of citizens in Pune calling themselves the Social Peace Force (SPF) had formed a group on Facebook in 2013, and had worked together on relief efforts during a drought in the region.[v] As an immediate response to the outbreak of violence in May 2014, they evolved a novel but effective method of addressing the material circulating on social media at the time. On their Facebook page, members of the SPF described themselves as a youth group formed to stop anti-social messages on Facebook. They took on the role of civil society watchdogs, and began monitoring content on social media, both to try and get material taken down, and to educate the public about the use of social media. While their Facebook group had more than 20,000 members at the time, they formed a core group of 10 persons who examined online content for what they deemed dangerous in the context of the ongoing communal tensions and violence. In order to identify this material, they searched for terms such as “Ambedkar”, “Sita”, “Shivaji”, etc, in multiple spellings and pronunciations. Once they identified such posts on Facebook, they called upon their larger Facebook group to report this content as spam (Hindustan Times 2014).
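The SPF's search for sensitive terms "in multiple spellings and pronunciations" can be illustrated with a short sketch. This is a hypothetical reconstruction, not the SPF's actual tooling: the term list, spelling variants and sample posts below are invented purely for illustration.

```python
# Illustrative sketch of keyword monitoring across spelling variants.
# The terms, variants and posts are hypothetical, not the SPF's data.
import re

# Each watched term mapped to common alternative transliterations
WATCHED_TERMS = {
    "shivaji": ["shivaji", "sivaji", "shivajee"],
    "ambedkar": ["ambedkar", "ambedkhar", "babasaheb"],
}

def find_watched_terms(post_text):
    """Return the watched terms whose spelling variants appear in a post."""
    text = post_text.lower()
    hits = set()
    for term, variants in WATCHED_TERMS.items():
        for v in variants:
            # whole-word match so "sivaji" does not match inside other words
            if re.search(r"\b" + re.escape(v) + r"\b", text):
                hits.add(term)
                break
    return sorted(hits)

posts = [
    "Morphed image insulting Sivaji Maharaj, share widely!",
    "Community meeting on Sunday at the hall.",
]
flagged = [p for p in posts if find_watched_terms(p)]
print(flagged)  # only the first post matches a watched variant
```

A real monitoring effort would of course also need human review of matches, since a keyword hit says nothing about whether a post is incendiary or, for instance, counter speech.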

On their Facebook page, the SPF state that they will not “Like” or “Comment” but will report spam as it is the easiest technological method of fighting those spreading “anti-national” messages and images. They state their mission as responding to those trying to dismantle “our social thread of unity and integrity and try to destroy our multi-cultural peaceful living, then, why not a group of youths with a noble cause, try to spread peace and huminity amongst the FB users and the society at large? [sic]” [vi]

The SPF members were worried that their communication while identifying and responding to such dangerous speech could have technically been interpreted as a violation of section 66A of the Information Technology Act (which was in force at the time, and which was struck down as unconstitutional by the Supreme Court in 2015). They thus involved the Pune Cyber Crime Cell and the then Maharashtra Minister of State for Home Satej Patil, who became members of their Facebook Group. This model was widely seen as a useful approach in tackling the situation. Anand Shinde, the SPF member I interviewed, said that Facebook publicly cited the SPF’s effort as a successful intervention.[vii]

However, the SPF’s methods drew sharp criticism from the film critic and commentator Shobha De,[viii] the South Asia desk of Human Rights Watch, and the sociologist Nandini Sardesai, who considered these actions another form of moral policing, with the danger that what is considered offensive becomes a matter of subjective choice. The Social Peace Force maintained that it did not discriminate on what kind of offensive content was taken down. “If a God’s image is replaced with a model’s body, we would identify this as bad. We did not discriminate based on religion,” said one of its members.[ix]

The Pune police adopted the SPF’s methods, and actively collaborated with them to address what they deemed dangerous speech. Besides Facebook, the Pune Cyber Crime Cell actively tracked material circulating on Whatsapp. They created a number of Whatsapp groups with representatives of the locality, prominent citizens and members of local societies, police chowkis, and social workers and politicians from the area. Around 3,300 police officers were added to these groups to help track circulating content and keep tabs on the happenings. Citizens were also asked to report incidents where “objectionable” content was being circulated, for which the police created an online reporting system where anyone could post URLs of offensive or objectionable content.[x] These measures became especially important with a platform like Whatsapp, which is not publicly accessible. The only way for the police to track content on Whatsapp was when someone brought it to their attention, or when they were themselves part of groups in which this material circulated.

The Pune police in turn used bulk SMSes, email, Twitter and Whatsapp to send out messages, and combined their online efforts with more traditional measures aimed at educating people on the use of social media, through programmes conducted in schools, colleges, housing societies, public spaces and community halls. They invited politicians and leaders of civil society organisations to these programmes, and collaborated actively with IT experts, social activists, teachers and other citizens. The police printed posters urging people not to “like” or “dislike” communally sensitive images and content, and not to post, comment on, share or forward such material. They created films, organised debates and lectures on this theme, and held interactions with elderly people and parents.[xi]

For the Pune police, the SPF Facebook group became an important site of intervention. On the group page, members not only identified material that was “objectionable” but also posted “responsible” messages and comments. As per the police, they had messages taken down from social media sites and websites blocked, after going through the legal process and CERT-In.[xii]

The collaboration between the police and the Social Peace Force to tackle dangerous speech online raises important questions about the viability and dangers of such an approach. While no doubt extremely effective, the approach could be described as social media vigilantism, since it did not rest on clear guidelines about what kind of speech was considered unacceptable. However, given the importance of a real time response in a serious situation of communal violence, where the threat of loss of life and property was an important consideration, it is important to acknowledge this as a novel model that civil society and the police experimented with. In any case, the Pune police and the Social Peace Force were not the ones who ultimately took down the content. Their role was to flag content as unacceptable and notify the social media platform concerned, in this case Facebook. In that sense, it is the social media platform that has the responsibility to ensure transparency, accountability and a system of rules and guidelines that users can recognise as standards, and which, when enforced in a regularised fashion, can begin to act as precedents.

Friction between Social Media Platforms and Law Enforcement

In 2014 Pune, there were also instances when the SPF was not successful in having material taken down. For instance, they were unsuccessful in persuading Facebook to take down a video in which people who claimed to be part of a religious group said they would bomb areas in Hyderabad, Mumbai, Delhi and other cities.[xiii] This is not an isolated incident. The public order concerns of law enforcement are in tension with the free speech positions of social media platforms such as Facebook, Google, and Twitter, which in many of these cases tend to tilt towards protecting freedom of speech and retaining material online. In India there seems to be much informal negotiation between these platforms and the police, along with the formal mechanisms through which they interact with each other.

For many years now, law enforcement agencies in India have been asking internet platforms to follow Indian law, including the provisions of the Information Technology Act and the speech-related provisions of criminal law, which have a relatively wide ambit as well as a history of misuse, as the controversy over section 66A and its subsequent striking down has shown. Internet platforms such as Facebook and Google are headquartered in the US, and follow the more liberal standard laid down by American First Amendment law. This has resulted in friction between successive Indian governments and internet platforms, and has often tied the hands of law enforcement agencies when it comes to enforcing Indian law that regulates content on social media.

If law enforcement agencies want content removed from a social media platform like Facebook or Google that is based in the US, and the platform refuses to do so, they can go through the Mutual Legal Assistance Treaty (MLAT) process. MLAT is a formal mechanism through which law enforcement in one country can obtain assistance, including access to internet records, for criminal investigations and proceedings involving evidence held in a foreign country.[xiv] The MLAT process involves multiple US agencies, including the Departments of Justice, State and Commerce. Due to the increasing number of requests for computer records, processing capacity has not kept pace with demand, which means that there is no clear timetable once a request is put in. Many police officers in India are not happy with this situation.[xv] This is one of the reasons that the Indian government has been pushing for internet platforms to locate their servers in the country, which it claims will help address dangerous speech in real time.

Social media platforms such as Facebook, Google and Twitter have their own internal mechanisms meant to address the problem of dangerous speech. In recent years, because of increasing pressure from a number of governments, especially in Europe, these companies have strengthened these internal mechanisms. Google, for instance, has instituted a flagging mechanism on YouTube through which users can flag content that violates YouTube’s Community Guidelines; more than 92,000 videos have been taken down as a result of community members flagging content.[xvi] Facebook has set up an Online Civil Courage Initiative that works in collaboration with non-governmental organisations in Germany, France and the UK to tackle hate speech on the platform. Twitter has instituted a Trust and Safety Council, which works with academics, advocates and researchers to tackle harassing and harmful speech.[xvii]

These platforms are also investing in local language expertise, and new technologies to identify and root out speech that does not meet their internal standards. For instance, Jigsaw (formerly Google Ideas), a technology incubator created by Google, is investing in artificial intelligence that will make keyword searches more accurate, and has been testing an algorithm that will help identify such keywords in the comments section of websites.[xviii]

Facebook now publishes six-monthly figures on government requests for data as well as requests to take down content. However, this information is cursory: it gives the overall numbers of government requests, but not how they were followed up, how many were complied with, or the reasoning behind Facebook’s decisions. In the period between January and June 2016, the majority of content to which access was restricted in response to legal requests from law enforcement agencies and CERT-In involved alleged violations of Indian laws relating to anti-religious speech and hate speech.[xix] After the Supreme Court’s decision in the Shreya Singhal judgment, Facebook has stated that it will not remove content unless it receives a binding court order or a notification from an authorised agency that conforms to the constitutional safeguards laid down by the court.[xx] Similarly, Google’s Transparency Report provides details of the number of requests from governments and courts to remove content.

From a free speech perspective, the fact that most governments cannot directly compel social media platforms to take down content is an important protection for users of these platforms, especially for dissenting and minority voices. At the same time, it is important for these platforms to make the processes and guidelines through which they evaluate content more transparent, so that users, police and civil society actors have a clearer sense of what kind of material is likely to be taken down. This in turn could limit the dangers of internet vigilantism, while helping to tackle dangerous speech on these platforms.

Social Media Labs and Infrastructures of Monitoring

The Pune Cyber Crime Cell, which played a key role in the response to the communal violence in the city in 2014, reports to the police commissioner and its jurisdiction extends to the entire city. In 2015, around 15% of the requests the cyber crime cell received dealt with hate speech or objectionable material. The largest share of requests (around 50%) dealt with online financial fraud and what are referred to as “419” emails, that is, advance fee scams where mass emails are sent asking for money and financial assistance.

The Pune cyber cell is involved in two types of activities—criminal investigation after an event, as well as active monitoring of information through ISP-based monitoring, website content filtering, keyword filtering and user identification—both of which can lead to requests to block or delete posts and content.[xxi] Since the events of 2014, the bulk of the cyber cell’s work has been to monitor information: to track persons actively using hate speech through keywords, tracking who they are friends with online, what groups they are on, the other people in these groups, and other publicly available information such as their public Facebook posts. For this, they use a method they term link analysis. As of August 2015, the police continued to have a presence on Whatsapp groups, and citizens were still able to send the police messages giving URLs of offensive or objectionable content. The cyber cell continued to actively collaborate with IT companies and cyber specialists.
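The "link analysis" described above—starting from a person flagged for hate speech and following shared group memberships outwards—can be sketched as a walk over a user–group graph. This is a schematic illustration on invented data; the police's actual tools, data sources and methods are not public.

```python
# Schematic sketch of link analysis over group memberships.
# All users and groups here are hypothetical, for illustration only.
from collections import deque

# group -> members (publicly visible membership, as the article describes)
groups = {
    "group_a": {"user1", "user2", "user3"},
    "group_b": {"user3", "user4"},
}

def linked_accounts(seed_user, groups, max_hops=2):
    """Breadth-first walk over shared group membership, up to max_hops."""
    seen = {seed_user}
    frontier = deque([(seed_user, 0)])
    while frontier:
        user, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for members in groups.values():
            if user in members:
                # everyone sharing a group with `user` is one hop away
                for other in members - seen:
                    seen.add(other)
                    frontier.append((other, hops + 1))
    seen.discard(seed_user)
    return sorted(seen)

# user1 shares group_a with user2 and user3; user3 shares group_b with user4
print(linked_accounts("user1", groups))  # ['user2', 'user3', 'user4']
```

The sketch also makes the civil liberties concern concrete: with each additional hop, the set of monitored persons grows to include people with no connection to the original post beyond shared group membership.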

In March 2016, the Pune police inaugurated a Social Media Monitoring Laboratory that is meant to monitor “unlawful activity” on social media sites such as Facebook, Twitter and YouTube (Times of India 2016). As per newspaper reports, the lab would have 18 police personnel working in shifts round the clock to identify hate speech and take prompt action before complaints are received from the public. They would also respond to complaints from the government and the public (Times of India 2016).

The Pune social media monitoring laboratory is modelled on existing social media labs in Mumbai and in Uttar Pradesh at Meerut and Lucknow. The Mumbai Social Media Lab (MSML) was set up in 2013 in collaboration with the National Association of Software and Services Companies (NASSCOM), the Data Security Council of India (DSCI), and Reliance (Hindustan Times 2014). Inaugurated by Bollywood star Abhishek Bachchan, the MSML was an immediate response to the massive mobilisation of protestors, as well as the public anger, around the December 2012 Jyoti Singh Pandey rape incident, much of which occurred on social media. While the Mumbai police have publicly stated that the MSML would only monitor material available in the public domain (Hindustan Times 2016), these technological developments in police surveillance, and the use of these methods by the states and the centre,[xxii] are bound to attract ethical and legal scrutiny (Datta 2015).

The setting up of these labs marks an institutional move where the police increasingly and actively monitor content on social media to prevent and predict events, rather than merely investigating after an event. Part of the reason for this is that tracking material on social media is becoming more difficult, given the contentious issues around jurisdiction, and the ease with which a media file can move from one platform to another, enabled by the ubiquitous mobile phone. Audio files, videos, photographs and text move seamlessly between platforms such as Whatsapp, YouTube, Facebook, Twitter and Instagram, making it difficult for the police to track the origins of such material after an event. It is not surprising then that in the Pune riots, while 21 persons were arrested for acts of violence, including the murder of Sheikh, no one has been identified or arrested for circulating objectionable material and hate speech.

Conclusion

The emergence of social media as a key site for the circulation of images, text and audio files, through internet-enabled mobile phones, throws up important questions for law and governance. The Sheikh bail order, with its warped logic, deals with a more traditional situation. The chain of events outlined by Justice Bhatkar in the bail order begins with a gathering where Dhananjay Desai and others make provocative speeches in a communally charged atmosphere. The accused, who were present at the meeting, go on a violent rampage and murder Sheikh. The assumption the judge makes is that the provocative speech, made in a highly charged atmosphere, influenced the listeners to perform a series of violent acts. In the cases of the communal violence in Pune in 2014, the Muzaffarnagar riots of 2013, the violence at Azad Maidan, Mumbai in 2012, and the exodus of persons from the North East from cities such as Bengaluru and Pune in 2012, the police have identified, arrested or charged those responsible for acts of violence or for making provocative speeches, but have not been able to trace dangerous speech circulating as videos, images or text to a definite source. The transnational flow of information, the ease of inter-platform exchange, and the speed and scale at which information travels have meant that the police’s focus is shifting to limiting the circulation of dangerous speech on social media, and to efforts at predicting violence by monitoring and tracking people online.

These characteristics of the media environment can be termed affordances, as they make possible and sometimes encourage certain kinds of practices, even though they do not determine what practices will unfold (Boyd 2014). The media scholar danah boyd points to four affordances that shape the mediated environments created by social media: 1) persistence: the durability of online expressions and content; 2) visibility: the potential audience who can bear witness; 3) spreadability: the ease with which content can be shared; and 4) searchability: the ability to find content (ibid).

The relatively low cost of smartphones, most of which are sold with Whatsapp and Facebook installed; the enormous popularity of both these platforms in India; the ease with which photographs, video and audio recordings can be made on mobile phones; the ability to morph and tinker with content; the ease of movement of media objects across social media platforms; the ability and impulse to forward material; and the potential of media objects to go viral online all blur the boundary between private and public speech. These affordances raise important questions for the law, including revisiting traditional notions and expectations of “private” and “public” space, and how one understands the proximity of speech acts to violence, which in turn has implications for the exercise of freedom of speech (Narrain, Sheshu 2016).

The SPF’s intelligent use of Facebook’s spam function in Pune to combat dangerous speech on social media is an example of how they leveraged the affordances available for their own goals. Not only did the Pune police collaborate with the SPF in this effort, it also used traditional means such as community interventions, counter propaganda, and efforts at educating the public to show restraint in circulating dangerous speech at the time of communal violence in the city in 2014.

Another example is the use of OTT platforms such as Whatsapp in the mobilisation of crowds, and in the circulation of dangerous speech. These platforms are end-to-end encrypted and not ordinarily accessible to law enforcement, which has triggered a heavy-handed approach from state governments. Where earlier the police identified and blocked websites they considered objectionable under the Blocking Rules of the IT Act, they are now increasingly resorting to banning the internet completely in geographically specified areas. According to statistics gathered by the Software Freedom Law Centre, there were more than 60 incidents of internet bans in various parts of the country between 2012 and 2017, most of them imposed under section 144 of the Code of Criminal Procedure (CrPC), the provision related to unlawful assembly that is usually invoked by the police to control and prevent riots and other public order disturbances.[xxiii]

This trend towards internet bans reflects a global phenomenon. In July 2016, the United Nations Human Rights Council passed a resolution expressing concern about measures meant to disrupt access to, or the dissemination of, information online.[xxiv] The resolution also stressed the importance of combating “advocacy of hatred that constituted incitement to discrimination or violence on the Internet.”[xxv] In Gujarat, the legality of the government’s decision to shut down the internet in many towns during the Patidar agitation in 2015 was challenged in the High Court.[xxvi] The petitioner, a law student, argued that the use of section 144 CrPC to shut down the internet was unconstitutional, and that the correct approach would have been to block offending sites using the blocking provisions under the IT Act. The Gujarat High Court upheld the legality of the internet ban, holding that it was not a blanket ban since it only shut down mobile internet while allowing internet access over broadband, and that the serious public order situation warranted such government action.

Another visible trend is the targeting of administrators of groups on Whatsapp and other such OTT platforms. The police have arrested a number of such persons for posts on their groups (Sikdar 2015). While courts have not decided this question in criminal cases, in a recent civil defamation claim the Delhi High Court held that the administrator of a group on Telegram (an OTT platform like Whatsapp) or on Google Groups could not be held responsible for defamatory statements made on the group.[xxvii]

In Kashmir, where restrictions on internet access and strict rules regarding mobile connections have been regularly enforced over the last five years, the state government announced in April 2016 that all Whatsapp groups in the Kashmir valley that shared news amongst users must register with district authorities. This was in response to the violence triggered by allegations that army personnel had molested a minor girl in Handwara town in Kupwara district. Following this, the district magistrate of Kupwara appointed the additional district information officer as the nodal authority to register Whatsapp news groups and to monitor their activities (Rashid 2016). The state government also announced that its employees would not be allowed to criticise or comment on official policies on Whatsapp, and that group administrators would be held responsible for rumours spread using the platform (ibid). While it is unclear how feasible these announcements are to implement, they alert us to the dense thicket of ethical, political and legal issues that such measures throw up.

The focus on limiting the circulation of content on social media to tackle dangerous speech, rather than prosecuting such speech after publication, has to do with the difficulty of tracing content back to a single author or source, and of tracking content across national borders. However, this shift raises troubling questions about police profiling, breach of privacy, and the muzzling of dissent, democratic mobilisation and political speech. There needs to be more discussion on the safeguards and oversight mechanisms required to ensure that such a shift does not lead to an abuse of police and state power.

In this article I have examined how a set of key actors, namely the police, civil society and social media platforms, responded to these changes in the context of a series of violent incidents in Pune in 2014 that resulted in the death of Sheikh. Sheikh worked as an IT manager with a private firm in Pune, one of India’s leading hubs for IT, start-ups and research institutions. The developments I have described in this paper are circumscribed by local and regional techno-social factors, including the availability of technical expertise in this domain, and the police’s infrastructure and ability to capitalise on it.

The use of the term dangerous speech acknowledges the relationship between speech and violence, while narrowing the scope to a much smaller subset of public speech than, for example, the term hate speech, which covers a much wider range of speech that denigrates and perpetuates discrimination against people in a group. The framework’s stress on non-legal measures to tackle such speech, outside of the criminal law, makes it suitable to communal violence in India, since the penal provisions meant to protect against incendiary speech, such as sections 153A and 295A of the Indian Penal Code, are often used to censor, harass and curtail free speech and expression. The dangerous speech framework’s attention to the various factors at play, particularly the means and medium of dissemination, lends itself to examining the rapid changes in the design and architecture of the media environment in India, particularly how the affordances that shape the mediated environment created by social media have reconfigured the way in which media, law and technology interact.

References

Ansari, Mubarak (2014): “FB Post Shuts Down Pune”, Pune Mirror, 2 June,  http://punemirror.indiatimes.com/pune/crime/fb-post-shuts-down-pune/articleshow/35911896.cms?.

Benesch, Susan (2012): “Dangerous Speech: A Proposal to Prevent Group Violence”, 12 January, World Policy Institute, http://www.worldpolicy.org/sites/default/files/Dangerous%20Speech%20Guidelines%20Benesch%20January%202012.pdf.

“Driven by Rumours, Exodus of NE People Continues from Karnataka” (2012): Hindustan Times, 17 August, http://www.hindustantimes.com/india/driven-by-rumours-exodus-of-ne-peopl…

“Special Train to Bangalore as North-eastern People Return” (2012): FirstPost, 2 September, http://www.firstpost.com/india/special-train-to-bangalore-as-north-eastern-people-return-440002.html.

“Indian Facebook Group Accused of Moral Policing for ‘Clean-up’ Drive” (2014): Hindustan Times, 17 Jun, http://www.hindustantimes.com/india/indian-facebook-group-accused-of-mor….

Katakam, Anupama (2014): “Stoking the Fire,” Frontline, 23 June, http://www.frontline.in/social-issues/stoking-the-fire/article6141599.ece.

Maynard, Jonathan Leader and Susan Benesch (2016): “Dangerous Speech and Dangerous Ideology: An Integrated Model for Monitoring and Prevention,” Genocide Studies and Prevention, Vol 9, No 3, pp 70-95.

“Mumbai Police to Track Social Media to Gauge Public Views” (2013): Hindustan Times, 14 June, http://www.hindustantimes.com/delhi-news/mumbai-police-tracks-social-med…

“Muzaffarnagar Riots Panel Gives Clean Chit to Sangeet Som, Silent on Akhilesh” (2016): The Indian Express, 6 March, http://indianexpress.com/article/india/india-news-india/intelligence-failure-led-to-2013-muzaffarnagar-riots-judicial-panel-report/.

Alcorn, Chauncey L. (2016): “Facebook Intensifies Its Battle against Online Hate Speech,” Fortune, 22 September, http://fortune.com/2016/09/22/facebook-hate-speech/.

Prakash, Pranesh (2012): “Analysing Latest List of Blocked Sites,” Centre for Internet and Society, 22 August, http://cis-india.org/internet-governance/blog/analysing-blocked-sites-riots-communalism.

Raghuvanshi, Umesh (2015): “Speeches, Video by Sangeet Som, Rana Fuelled Muzaffarnagar Riots,” Hindustan Times, 14 October, http://www.hindustantimes.com/india/speeches-video-by-sangeet-som-rana-f….

Sikdar, Shubhomoy (2015): “If You Are a Whatsapp Group Admin Better Be Careful,” Hindu, 12 August, http://www.thehindu.com/news/national/other-states/if-you-are-a-whatsapp….

“Whatsapp Group Admin Arrested for Objectionable Content” (2015): Hindu, 8 October, http://www.thehindu.com/news/national/other-states/whatsapp-group-admin-….

Singh, Vijaita (2016): “FB, Twitter, Google Asked to Set Up India Servers,” Hindu, 18 October, http://www.thehindu.com/sci-tech/technology/internet/FB-Twitter-Google-asked-to-set-up-India-servers/article14224506.ece.

Rashid, Taufiq (2016): “Whatsapp Groups Sharing News in Kashmir Valley Must Register: Govt,” Hindustan Times, http://www.hindustantimes.com/india/jammu-and-kashmir-whatsapp-groups-sp…

Ranjan, Amitav (2016): “Now, Govt Cyber Cell to Counter Negative News,” Indian Express, 23 February, http://indianexpress.com/article/india/india-news-india/now-govt-cyber-c….

Notes


[i] Vijay Gambhire v. The State of Maharashtra with Ranjeet Yadav v. The State of Maharashtra and Ajay Dilip Lalge v. The State of Maharashtra, Bombay High Court, order dated 12 January 2017.

[ii] The author lived in Bangalore at the time of these incidents, and was part of an initiative called the North East Solidarity Forum, a network of civil society organisations that started a helpline to intervene on behalf of persons from the North East, as well as Tibetans and others affected by the threats at the time.

[iii] The Blocking Rules under section 69A of the Information Technology Act require that persons and intermediaries hosting the content should have been notified and provided 48 hours to respond (under Rule 8 of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules 2009). Under the emergency provision (Rule 9), the block issued has to be introduced before the “Committee for Examination of Request” within 48 hours, and the committee has to notify the persons and intermediaries hosting the content. See Prakash 2012.

[iv] Interview with Anand Shinde, Pune, 9 September 2015.

[v] A group led by Ravi Ghate formed the Drought Help Group on Facebook. They held a press conference and invited people to join their efforts and got 50 volunteers. They did not deal with money directly but matched the requirements of affected people (pipelines, farm tanks, sanitation etc.) with donations. They did this using Whatsapp and Facebook.

[vi] See https://www.facebook.com/groups/SocialPeaceForce/

[vii] Interview with Anand Shinde, Pune, 9 September 2015.

[viii] Ibid

[ix] Ibid

[x] Interviews with Dr Sanjay Tungar, Cyber Crime Cell, Crime Branch, Pune, 9 September 2015, and with Manoj Patil, Deputy Commissioner of Police, Pune, 10 September 2015.

[xi] Interview with Manoj Patil, Deputy Commissioner of Police Pune, 10 September 2015

[xii] Ibid

[xiv] http://cis-india.org/internet-governance/blog/presentation-on-mlats.pdf, accessed on 21 February 2017

[xv] See https://www.justice.gov/sites/default/files/jmd/legacy/2014/07/13/mut-le….

[xvi] Speech by Chetan Krishnaswamy, Country Head, Public Policy, Google India, at the launch of Software Freedom Law Centre’s report “Online Harassment: A Form of Censorship,” 22 November 2016, New Delhi.

[xvii] See https://about.twitter.com/safety/council, accessed on 21 February 2017

[xviii] Speech by Chetan Krishnaswamy, Country Head, Public Policy, Google India, at the launch of Software Freedom Law Centre’s report “Online Harassment: A Form of Censorship,” 22 November 2016, New Delhi.

[xix] See https://govtrequests.facebook.com/country/India/2016-H1/.

[xx] Ibid

[xxi] Interview with Sanjay Tungar, Cyber Crime Cell, Crime Branch, Pune, 9 September 2015.

[xxii] The Central Government has proposed a National Media Analytics Centre (NMAC), which appears to have a much wider ambit than the social media labs discussed in this paper. Besides highlighting “belligerent comments” on social media that could lead to public order disturbances and protests, the proposed NMAC will analyse blogs, social media posts and web portals of television channels and newspapers, and help the government counter negative publicity and factually correct “intentional canards”. See Ranjan (2016).

[xxiii] See Software Freedom Law Centre, “Internet Shutdowns in India Since 2012,” updated on 6 February 2017, http://sflc.in/wp-content/uploads/2016/04/InternetShutDowns_Feb7,2017.pdf. Also see Centre for Communication Governance, National Law University Delhi, “Internet Shutdowns: An Update,” Legally India, 14 July 2016, http://www.legallyindia.com/blogs/internet-shutdowns-an-update.

[xxiv] See United Nations Human Rights Council Resolution on The Promotion, Protection and Enjoyment of Human Rights on the Internet, A/HRC/32/L.20, passed on 1 July 2016, https://www.article19.org/data/files/Internet_Statement_Adopted.pdf, accessed on 23 February 2017

[xxv] Ibid

[xxvi] Gaurav Sureshbhai Vyas v. State of Gujarat, Writ Petition (PIL) No. 191 of 2015

[xxvii] Ashish Bhalla v. Suresh Chawdhury & Ors., CS (OS) No. 188/2016, IA No. 4901/2016, IA No. 8988/2016, IA No. 9553/2016, IA No. 9554/2016 & IA No. 11830/2016, decided on 20 November 2016.

Siddharth Narrain (siddharth.narrain@gmail.com) is a visiting faculty at the School of Law, Governance and Citizenship, Ambedkar University Delhi and Honorary Research Fellow with The Sarai Programme, Centre for the Study of Developing Societies, Delhi.

Vol 52, Issue No 34, 26 August 2017

The article appeared on www.epw.in/engage on 24 August 2017.