The Pulse

How to Better Protect Free Speech in Indian Cyberspace



Online platforms that operate in India need to be regulated to ensure free and fair elections and to protect data privacy and free speech.


The digital world can be a dangerous place, particularly in India with its increasingly tech-based economy. The actions and inactions of the companies that facilitate our use of the internet — from internet service providers to search engines and social media platforms — can trigger a wide range of harms, including identity theft, privacy violations, and misinformation.

Certain harms merit more attention from regulators. These are harms that directly impair, subvert, or compromise Indian constitutional values such as freedom of speech and expression, equality, and democracy. A company's capacity to trigger such harms depends directly on what it does and how big it is.

For example, companies supplying internet access through mobile or broadband networks have a limited capacity to trigger misinformation and disinformation, since they do not directly host or publish content, unlike Google, Facebook, and X, which do.

Threats to Free Speech and Equity

There are various types of harms.

There are cybercrimes such as identity theft, child pornography, and copyright violations. Most of these fall under the category of private harms. They are addressed through criminal sanctions, and policing usually focuses on the individuals committing such crimes. But regulators have also started to focus on platforms' obligations to help address such crimes; where platforms fail to act, criminal proceedings against them have been initiated. The arrest of Pavel Durov, the CEO of the encrypted messaging app Telegram, is a clear example of this.

Another kind of harm is anti-competitive practices, where platforms that are also producers of goods and services may stop competing firms from entering the market. Indian authorities are trying to stop such behavior by fining large platforms such as Google.

Yet another kind of harm relates to privacy violations, which can lead to individual and collective discrimination. Most social media platforms are free to use but gain access to users' data in exchange. Platforms monetize this data by selling it to all kinds of companies. It can then be used by public and private actors to make decisions that may result in price discrimination, denial of access to goods, services, and credit, and loss of employment opportunities.

Recently, in the United Kingdom, the country's employment tribunal found that revoking a job offer based on a prospective candidate's social media posts was discriminatory.

According to one study, personalization based on user preferences by e-commerce platforms has resulted in widespread price discrimination. Because it affects users at scale, such discrimination qualifies as a public harm.

Finally, there are harms that include deliberate online falsehoods, which misinform, misguide, and incite negative social action including violence and censorship. For instance, voters can be influenced through false information.

In 2019, the Internet and Mobile Association of India announced a voluntary code of ethics that platforms adopted to regulate online content. The code was developed in response to challenges highlighted by the Election Commission of India including maintaining transparency in political advertisements. It was adhered to during the general elections held earlier this year. 

Making Online Platforms Accountable

Many of these problems stem from digital platforms having long operated on the legal principle of “safe harbor.” This means they are not liable for actions triggered by their users.

For instance, if Google's Chrome browser were used to illegally access copyrighted material, or X were used to issue hoax bomb threats, both Google and X would be exempt from liability.

Safe harbor applies only when the platforms have not initiated the transmission, selected the receiver, or modified the content. The exemption also requires that platforms undertake due diligence as prescribed by law, including responding swiftly to courts and enforcement agencies.

Here, the logic is that the architecture of the platforms does not allow for content to be checked before it is published, unlike what occurs with a traditional publisher. However, given that platforms such as X and Facebook profit from the interactions of all their users, including those whose actions cause harm, they should share liability too.

Some harms can be linked directly to the platforms, and the liability should lie solely with them. For instance, if Google prioritizes its own apps over those developed by others, it creates entry barriers for competitors and stifles competition. Lawmakers are realizing this and pushing for legal accountability.

In India, the rules regulating information technology have expanded due diligence obligations for social media platforms. Platforms such as Facebook with large user bases have additional legal obligations, including appointing a chief compliance officer and publishing compliance reports.

Constitutional Harms

While it is difficult to regulate content prior to publication, platforms can still regulate content afterwards. Mostly, they choose not to, because falsehoods spread faster, capture user attention, and hence drive engagement. Platforms may also abuse their regulatory capacity to curb online speech and expression, usually at the behest of the government.

Recently, the Bombay High Court blocked one such attempt by striking down an amendment that would have established a government fact-check unit to identify “false, fake, and misleading” information about the “business of the government.” Had the amendment survived, the government would have had the power to compel platforms to remove information it found inconvenient.

The amendment was declared unconstitutional on the grounds that it would lead to censorship and have a chilling effect on free speech. This is why such harms need to be categorized as constitutional harms — as they directly impair, subvert, and compromise constitutional values such as the right to freedom of speech.

It’s not the first time such an attempt has been made. 

Recently, Indians have witnessed an unprecedented expansion of the state’s regulatory powers purportedly to address harms such as cybercrimes and misinformation. But the grounds for exercising such powers are too broad and may end up threatening free speech, consequently fueling further constitutional harms.

Protecting User Data and Users

The conduct of platforms is critical to ensure constitutional values are protected in cyberspace; thus the services of these platforms ought to be regulated more robustly. Ways of doing this could include requiring them to publicly disclose information such as internal company policies on prioritizing search traffic based on advertisements, content moderation and blocking, and government requests for content moderation. 

There could also be an absolute prohibition on the collection of sensitive personal information by platforms, given it may lead to community-wide discrimination.

Governments could also refrain from passing laws that might lead to constitutional harms. To ensure this, the judiciary has to develop a standard of review to assess such laws.

Currently, the standard of review is the proportionality test: if the state takes an action that restricts a fundamental right, the restriction must be balanced against the goal it seeks to achieve. For instance, the benefits of a biometrics-enabled identity card for access to government subsidies should be weighed against the threat to privacy. This mechanism is inadequate, since it not only presumes the primacy of the state's public policy objective, but also gives the state wide discretion in its choice of tools.

Originally published under Creative Commons by 360info™.
