21 January 2021, Gateway House

Regulating Big Tech Intermediaries

Social media platforms such as Twitter and Facebook have come under global scrutiny in recent months after being used to incite violence and misinform the public. For years, governments around the democratic world have not taken the responsibility to adequately regulate these platforms. Now that may be changing – and it won’t be easy.



Since 8 January 2021, when Twitter permanently suspended then U.S. President Donald Trump[1], the might of social media platforms has been on full display. Instagram, Facebook and Snapchat followed in suspending Trump’s accounts, and Parler, a Twitter-like platform with minimal content restrictions, was cut off by its web host Amazon and removed from Google’s and Apple’s app stores for failing to moderate its content.[2]

Social media platforms, often referred to as intermediaries, have become powerful purveyors of content – and its de facto regulators. With billions of users[3] and multi-billion-dollar valuations, they are consumer and capital powerhouses.

Yet for years, governments around the democratic world have not taken the responsibility to adequately regulate these intermediaries. Now that may be changing – and it won’t be easy.

Globally, social media companies are protected by ‘safe harbour’ provisions. These shield an intermediary, say Twitter or Google, from being penalised for harmful or unlawful content on its platform, provided it did not create or modify the content and had no knowledge of it being posted by a user. Liability rests with the content creator or user, whether an individual or an entity. In recent years, the Indian judiciary has clarified ambiguous provisions[4] on intermediaries’ obligation to take down unlawful content, keeping in mind users’ fundamental right to freedom of expression.[5]

The U.S. offers similar protection to internet companies through Section 230 of the Communications Decency Act[6]. In Europe, the e-Commerce Directive 2000[7], the foundational legislation on internet providers, provides protection to internet intermediaries if they act only as a conduit and do not have actual knowledge of unlawful content.

In India, the Ministry of Electronics and Information Technology proposed amendments to the Intermediary Guidelines[8] in 2018, including the mandatory use of technology such as machine learning in content moderation and data disclosures to the government. These amendments are still under review as the government seeks to align them with the pending Personal Data Protection Bill.

Europe and Australia have been first off the mark in trying to regulate intermediaries effectively. In December 2020, the European Commission, building on the e-Commerce Directive, introduced the well-drafted and comprehensive Digital Services Act[9], which addresses the handling of online content, the liability of intermediaries, due-diligence requirements and the protection of individuals’ fundamental rights. Intermediaries’ obligations include timely notification of law enforcement agencies about illegal content, content takedowns, transparency disclosures such as details of account suspensions and content removals, rules on digital advertising, the appointment of compliance officers and annual audits. These rules await adoption by the European Parliament and the Council of the European Union.

Australia adopted stricter rules after the Christchurch terrorist attack, which the perpetrator livestreamed on social media to glorify and incite violence. The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act, 2019[10] requires social media platforms to expeditiously remove violent content and imposes a heavy penalty for non-compliance – up to 10% of the company’s annual turnover.

Otherwise, country-specific and global rules on intermediary liability and content takedowns are largely absent, and social media companies have been self-regulating. In 2017, Facebook, Twitter, Microsoft and YouTube established the Global Internet Forum to Counter Terrorism (GIFCT) to share knowledge across platforms, set standards and provide guidelines.[11] This is a good step, but it is insufficient for general content regulation, where powers of enforcement and redressal are needed to protect affected users and the public interest, given jurisdictional complexities and the vast scope of content.

India, at least, can make some decisions on this issue. As the Ministry of Electronics and Information Technology revises the Intermediary Guidelines, it can consider following the guiding principles of Transparency, Accountability and Grievance Redressal (TAG). These assume increasing importance in the current and post-COVID era, with intermediaries becoming ever more central to daily communication and knowledge-sharing:

  1. Transparency: Each social media intermediary should disclose, in a timely manner, the process it follows in moderating content, the technology it applies, how it categorises content as lawful or unlawful, and what content it takes down;
  2. Accountability: Make the principle of ‘duty of care’ central – i.e., impose positive obligations on intermediaries to prevent their users from harming others. Here, it is important to balance accountability with adequate immunity for intermediaries through clear safe harbour provisions;
  3. Grievance Redressal: An independent judicial body should be assigned for grievance redressal and dispute resolution along with provisions for following the due process of law.

The government may also consider emulating the European classification of intermediaries, which places social media platforms in a separate sub-category, ‘online platforms’, with its own rules.

Globally, because internet giants operate across territorial boundaries, the G20 Digital Economy Taskforce can be the platform for developed and developing nations to share the challenges they face at home and the best practices that can shape global standards and guidelines on the liability of social media intermediaries. Policymakers must draft these with adequate leeway, given the ongoing evolution of domestic digital laws and the fact that what is unlawful in one jurisdiction may not be so in another.

A change in the status quo is urgently needed. It can only be effected with the active participation and deliberation of all stakeholders: tech companies, civil society, academia and governments. Together, they can strike the necessary balance between controlling misinformation and unlawful content and protecting citizens’ rights, including the freedom of speech.

Ambika Khanna is Senior Researcher, International Law Studies Programme, Gateway House.

This article was exclusively written by Gateway House: Indian Council on Global Relations.

For interview requests with the author, please contact outreach@gatewayhouse.in.

© Copyright 2021 Gateway House: Indian Council on Global Relations. All rights reserved. Any unauthorized copying or reproduction is strictly prohibited.

References 

[1] Twitter. “Permanent Suspension of @RealDonaldTrump.” https://blog.twitter.com/en_us/topics/company/2020/suspension.html. (Accessed January 21, 2021)

[2] Nicas, Jack, and Davey Alba. “Amazon, Apple and Google Cut Off Parler, an App That Drew Trump Supporters.” The New York Times, January 10, 2021. https://www.nytimes.com/2021/01/09/technology/apple-google-parler.html.

[3] As an example, Facebook has 2.7 billion monthly active users and Twitter has 152 million daily active users.

[4] Section 79 of the Information Technology Act read with the Intermediary Guidelines 2011.

[5] Shreya Singhal v. Union of India; Google India v. Visakha Industries.

[6] Columbia University. “Communications Decency Act, 47 U.S.C. §230.” http://www.columbia.edu/~mr2651/ecommerce3/2nd/statutes/CommunicationsDecencyAct.pdf. (Accessed January 21, 2021)

[7] European Union. “Directive 2000/31/EC (e-Commerce Directive).” EUR-Lex. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A32000L0031. (Accessed January 21, 2021)

[8] Section 79 of the Information Technology Act 2000 and the Intermediary Guidelines 2011 are the fundamental legislation for intermediary regulation in India. They broadly outline the due-diligence obligations of intermediaries, including content takedown guidelines, but lack clarity and impose only limited obligations, allowing social media platforms to continue to be misused and to spread misinformation.

[9] European Commission. “The Digital Services Act: Ensuring a Safe and Accountable Online Environment.” https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/digital-services-act-ensuring-safe-and-accountable-online-environment_en. (Accessed January 5, 2021)

[10] Australian Government. Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019. https://www.legislation.gov.au/Details/C2019A00038. (Accessed January 21, 2021)

[11] Microsoft Corporate Blogs. “Global Internet Forum to Counter Terrorism Has First Meeting Aug. 1.” Microsoft On the Issues, August 1, 2017. https://blogs.microsoft.com/on-the-issues/2017/07/31/global-internet-forum-counter-terrorism-first-meeting-aug-1/.
