India’s New Rules on Social Media Regulation — A Comparative Analysis vis-à-vis Other Major Democracies

Padmini Das
Aug 16, 2022

Recently, the Indian Government announced significant changes to the regulation of social media, OTT and digital media companies.

With the introduction of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, new layers of content regulation have been added to the mix, such as disclosure of content-originator information, compulsory removal of objectionable content within 36 hours, and compliance by social media firms with security-related investigations.

Let us summarise these changes and compare them to their contemporary regulations prevalent in other countries around the world.

What Do the New Rules Specify?

  1. Social media companies (with messaging services) must divulge information regarding the first originator of “mischievous” messages when required to do so by the Government.
  2. Companies must establish a grievance redressal mechanism to deal with user complaints, which are to be acknowledged within 24 hours and resolved within 15 days.
  3. Distinction between “regular social media intermediary” and “significant social media intermediary” — the latter refers to platforms with more than 5 million registered users, which must abide by additional obligations (appoint a 24x7 contact point for law enforcement, appoint a chief compliance officer, publish periodic compliance reports).
  4. OTT platforms will have to self-classify their content into five age-based categories (U, U/A 7+, U/A 13+, U/A 16+, A), display content rating for all programming and implement parental locks.

These aren’t a new set of laws per se, but rather new rules notified under the IT Act, 2000, which will supersede the 2011 guidelines for internet intermediaries in case of conflict. The 2011 guidelines and the new rules are similar in one respect: to claim the “safe harbour protection” (i.e. immunity from liability for third-party actions on their platforms) under the IT Act, social media intermediaries must actively observe the due diligence required of them. The difference is that the new rules call for a level of diligence that conflicts directly with user-privacy norms by asking intermediaries to circumvent their encryption mechanisms.

Why the Sudden Increase in Oversight?

Social media companies like Facebook, WhatsApp and Twitter have had a history of regulatory defiance, especially when it comes to censorship and sharing of user information with the State.

Twitter’s recent standoff with the Indian Government over taking down “incendiary and baseless” hashtags related to the farmers’ protests has intensified the situation.

Similarly, WhatsApp’s reported “double standards” in rolling out different privacy and data-sharing policies across Europe and India have also drawn criticism from various quarters.

But at the heart of the issue lies the growing call to enforce stringent regulatory standards on Big Tech, given their increasing role in influencing political debates and civil-society issues, and in fomenting, inadvertently or not, aggression and violence among the public.

Let’s see how mature democracies around the world have rallied proactively towards curbing this heightened influence.

[Chart: social media user base]

New Zealand

Following the live-streaming of the Christchurch shootings on Facebook, there was a rallying cry from the public to prevent instances of online violence in the country. This led to the signing of the “Christchurch Call” in May 2019, an agreement joined by 48 countries so far (India included) as well as representatives from eight tech companies (including Amazon, Google, Microsoft, YouTube, Facebook and Twitter).

Although this is a non-enforceable agreement, it helped shape the goals to control violent extremist content online with mutual cooperation between governments and private bodies.

Australia

Australia passed the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act in 2019, introducing criminal penalties (fines of up to 10% of annual turnover) for tech companies and their executives who fail to prevent the circulation of such content on their platforms.

The conversation on digital content regulation had earlier peaked in the country after the suicide of Charlotte Dawson, a popular TV personality who had been targeted by a cyber-bullying campaign on Twitter. This led to another piece of legislation, the Enhancing Online Safety Act 2015, which made it compulsory for social media companies to take down harassing and abusive posts or face crippling fines.

This is, of course, in addition to Australia’s global leadership in structuring content monetisation arrangements between content creators and distributors — a framework that several other nations may follow. Australia recently passed a law to ensure that tech giants such as Google and Facebook compensate news companies for content created by publishers but distributed on these platforms.

UK

The UK has embarked on a concerted process to develop regulations on online content through a series of white papers, government briefs and legislative consultations, in the deliberative fashion characteristic of British policymaking.

The Government is close to introducing legislation that would require social media platforms hosting user-generated content (Facebook, Instagram, TikTok, Twitter etc.) to curb the spread of content involving sexual abuse, terrorism, suicide, cyber-bullying, pornography etc. The guiding motivation behind these changes is the protection of children and young audiences from online harms.

Germany

Germany has set a very high standard in social media regulation. Mindful of its past, and with historically strict hate-speech laws, it developed the “NetzDG” law (aka the Facebook Act), which requires platforms to remove “manifestly unlawful” posts within 24 hours or incur fines of up to €50m. It is a classic “take down, ask later” approach that has been considered necessary and effective in the crackdown on online extremism.

Although some believe it has spawned a “draconian censorship regime” (partly due to the co-opting of the law by non-democratic states like Russia for stymying free speech), the law is notable for delegating these legal judgments to the tech firms themselves rather than placing them under government scrutiny: the companies decide what to keep and what to delete.

Canada

After the storming of the US Capitol Building in January this year, the Canadian government has rolled up its sleeves to tighten social media regulation. Earlier, after intelligence reports surfaced in April 2019 regarding possible foreign interference in Canadian national elections, the country passed the Elections Modernization Act to close loopholes open to manipulation by foreign entities and miscreants on social media platforms.

[Chart: content removal]

Social Media Distancing

The Indian Government’s approach to online content regulation currently mirrors that of the UK, even though the latter is yet to codify said regulation. The proposed disclosure of the first originator of messages appears to be a backdoor around encryption, invoking the constitutional restrictions (national security and protection of sovereignty, for the win) on the right to privacy that encryption safeguards.

In any case, the rushed notification via delegated legislation (a Cabinet decision without parliamentary approval) makes the new rules seem more like regulatory retribution against Big Tech than an effort to minimise the ills of online violence.

(Originally published February 28th 2021 in transfin.in)


Padmini Das

Lawyer and policy professional. Passionate about international law and governance.