THE SOCIAL MEDIA TRIBUNAL VERDICT
This tribunal relates to alleged violations of human rights and other serious harm caused to people around the world by social media companies such as Facebook, X (formerly Twitter), YouTube, Discord, Snapchat, Instagram, TikTok, Reddit and many others not named here.
We begin by noting, as pointed out by defense counsel today, that social media, as a part of the digital age in which we live, are globally influencing the lives of billions of people all over the world. Social media could be an important tool to support freedom, peace and human rights. However, because most platforms are dominated by large commercial enterprises with an obvious interest in increasing profits, as well as in economic and political power, these platforms often fail to serve the important interest of supporting freedom, peace and human rights.
Accordingly, we will conclude our findings of fact and conclusions of law by respectfully making a number of recommendations, which we believe will improve the ecosystem in which social media operates so these interests are protected and furthered.
Having heard the testimony of many fact witnesses who testified regarding the harms they or their family members suffered due to the alleged conduct of social media companies, as well as testimony from experts in many fields related to the impact of social media on adults and children throughout the world, and having heard arguments from highly qualified, respected and experienced counsel for the prosecution and for the defense, the Tribunal will now make what we call preliminary findings of fact and certain conclusions of law, and we will issue specific recommendations that we urge the social media companies to adopt.
This tribunal is governed by the Statute of the Court of the Citizens of the World, and we rely in particular on certain rules found in that statute. Specifically, Rule 18 provides, in pertinent part, for this Court's jurisdiction, and I quote: "The court shall possess global jurisdiction over individuals in their personal or professional capacities, corporations and any other legal or natural entities, regardless of their domicile, nationality or governing authority. Yet the Court's authority shall be limited to the analysis and evaluation of evidence serving as a means to render an impartial judgment on alleged human rights violations. The jurisdiction of the court shall encompass the human rights violations under consideration in this case and the human rights provisions contained in the Rome Statute, international human rights conventions, the general principles of international law and customary international law." That closes the quote of Rule 18.
Rule 30 provides, in pertinent part, as follows, quote: "The standard of proof utilized in the court proceedings considering allegations of human rights violations shall meet the threshold of reasonable grounds to believe." In particular, the court has applied the laws, conventions and guiding principles of the following: the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, the European Convention on Human Rights, the UN Guiding Principles on Business and Human Rights, and the Digital Services Act of the European Union.
And now I begin our findings of fact, which I number:
Number one, based on the testimony of many witnesses, we find that there are reasonable grounds to believe that the social media companies knowingly and intentionally allowed and/or failed to remove unlawful or inappropriate content, including but not limited to hate speech; incitement to commit violence against people based on their race, ethnicity, gender or national origin; sexually explicit material targeted to children; violent acts involving children; and extortion schemes based on invasions of personal privacy.
We reject the defense that social media platforms were justified in allowing such materials to be available because of the concept of freedom of speech. Speech that incites or causes violence, or that tramples on human rights such as the right to privacy and the right to protect one's own reputation, violates the rule of law because it causes harm and falls outside the bounds of ethical, legal and humanitarian norms. It undermines the principles of respect, dignity and peaceful coexistence that are essential to human rights.
We find that there are reasonable grounds to believe that the social media platforms are neither proactive nor reactive in ensuring that content posted on their platforms by users complies with their own policies and the principles governing human rights. This failure is a direct result, we find, of the social media platforms' desire for increased profits rather than protecting the rights of their users. Advertising is the way in which social media platforms obtain their revenue and their profit. The evidence showed that the more the user accesses the platform and the longer the user remains on the platform, the more the platform will benefit by users noticing the ads, looking at the contents of the ads, and potentially purchasing an advertised product. All three of those activities generate revenue for the platform.
Number two, the EU's Digital Services Act, adopted in 2022, requires social media companies to undertake content moderation of material posted on their platforms. Similarly, in India, the Information Technology Act of 2000 requires social media companies to conduct due diligence, including reporting cybersecurity incidents to a computer emergency response team, and they must take down content upon receiving notice of a court order or a direction from a government agency. The Indian Protection of Children from Sexual Offences Act of 2012, called POCSO, mandates that social media companies report criminal or inappropriate content, such as child sexual abuse material, to law enforcement agencies.
While neither the DSA nor Indian law governs all of the world's jurisdictions, we find that there are reasonable grounds to believe that the failure to review content, either prior to publication or surely after receiving notice that certain content is dangerous and inappropriate, is grossly negligent and reckless.
We also find that social media platforms have the technology to review content by using artificial intelligence and/or by using highly trained human reviewers, and that the cost to do so would not be prohibitive. We also find that social media platforms undoubtedly know that some of the content displayed on their platforms is dangerous and inappropriate, often leading to adverse effects on the safety, mental health and well-being of their users. Nonetheless, the preponderance of the credible evidence shows that despite that knowledge, the social media platforms often permit this material to be published and/or decline to promptly take it down when informed by users of the dangers posed by the continued access to this information.
Number three, we find that there are reasonable grounds to believe that the social media platforms not only violate the provisions of the laws, guiding principles and conventions cited above, but also knowingly and intentionally violate their own Terms of Use and Privacy policies, which explicitly prohibit the publication of hate speech, incitement to violence, threats of violence, extortion and other impermissible content.
This is circumstantial evidence, at the very least, that the social media platforms knowingly and intentionally are permitting their platforms to be used by bad actors who have committed cyber crimes and/or have interfered in elections by disseminating false or misleading information and/or have been complicit in encouraging genocide or the involuntary transfer of population.
Number four, we briefly provide only a few examples at this time in support of this finding:
- Social media was used in Myanmar to encourage sexual violence and assaults on the minority Muslim Rohingya population, causing hundreds of thousands to flee the country. Posts such as "All Rohingya must be killed," "Rohingya are vermin that must be eliminated," and "Rohingya must be driven out of the country" were allowed to appear on a social media platform.
- Posts were also published on social media platforms suggesting that children were ugly and hated by everyone.
- Other posts led children to engage in self-harm or encouraged children to engage in sexually explicit acts; having done so, the children were then threatened with what is called "sextortion." Sextortion perpetrators told them that if they did not pay money, videos or pictures of those acts would be widely disseminated; they were told that their life was over anyway and that their only choice was to commit suicide.
- Even after a child did commit suicide, family members continued to be threatened with public exposure of the sexually explicit images.
Number five, we reject the defense contention that the social media platforms are not responsible for the behavior of third parties who are the ones committing cyber crimes or engaging in criminal activity such as extortion, cyberbullying or cyberstalking.
It is true that the social media platforms are not committing those crimes directly, but by permitting these third parties to publish material that is criminal or will lead to criminal conduct, and by failing to remove such material despite actual notice, the social media platforms are facilitating those crimes.
Number six, these types of social media content clearly violate human rights and freedoms, such as the right to privacy, and they cause severe consequences to the victims of that conduct, including self-harm, psychological trauma and even death. By failing to act to prevent hate speech and online criminal activity, the social media platforms violate the human rights of their users, as specified in the laws, conventions and guidance cited earlier.
Number seven, we find that social media platforms have made no effort to protect children from harmful content. Moreover, we find that the social media platforms are well aware that children are endangered due to the absence of parental control and parental access to their children's social media accounts.
We also find that social media platforms have been complicit in knowingly causing children, through algorithmic recommendations, to become addicted to using social media. The testimony showed that many children spend as much as five hours a day on these platforms, even waking up repeatedly during the night to check their feeds.
As just one example, a witness testified that intimate images she had placed in a supposedly safe and private place were stolen from a social media platform, despite being protected by a privacy setting and a "my eyes only" feature. The images were widely disseminated after the data theft, causing her such great harm that she was forced to flee her country of residence.
And now I turn to our conclusions of law.
We conclude that the social media platforms, in general, have violated the following conventions, laws and guidance.
I begin with international human rights law, and I cite the Universal Declaration of Human Rights, Articles 2, 3 and 12. I won't read the language of those articles, but those are the sections I cite.
We also cite the ICCPR, Article 20, paragraph 2, which prohibits incitement to hatred, discrimination or violence. Section 10.2 of that same covenant mentions restrictions on freedom of expression to prevent incitement to hatred, and Article 14 prohibits discrimination. Again, I am not reading the language of those sections, but we rely on them.
In addition, we cite the United Nations Convention on the Rights of the Child, in particular Article 16, which states that every child has the right to privacy.
We cite the Convention on the Elimination of All Forms of Racial Discrimination, Article 2 of which condemns racial discrimination in all its forms, and the Convention on the Elimination of All Forms of Discrimination Against Women, Article 2 of which condemns discrimination against women in all its forms.
We cite the United Nations Guiding Principles on Business and Human Rights, in particular Article 17.
And now I cite the OECD Guidelines for Multinational Enterprises, Article 2, a.2, which says that enterprises should take fully into account established policies in the countries in which they operate and consider the views of other stakeholders, and that enterprises should respect the internationally recognized human rights of those affected by their activities.
Article 17 of the ICCPR says that no one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honor and reputation.
And ECHR, Article 8 says everyone has the right to respect for his private and family life, his home and his correspondence.
In particular, we turn to the failure to address cyberbullying and harmful content, including the promotion of harmful challenges, revenge porn and self-harm. Here we cite in particular Articles 6, 7, 17, 19 and 20, paragraph 2, of the ICCPR.
We cite Articles 8 and 10 of the ECHR; the latter provides that everyone has the right to freedom of expression.
We now turn to our recommendations, of which we have 18, but they're short.
1. Social media platforms must be held accountable for their actions by being exposed to civil penalties.
2. Social media platforms must act responsibly to respect the human rights of their users.
3. Social media platforms must accept the obligation to filter and screen the content of online speech to ensure that cyber crimes such as hate speech, criminal conduct and the spread of dangerous misinformation and disinformation are significantly reduced and soon eliminated.
4. Social media platforms must invest in and implement technical and manual measures to ensure that all artificial intelligence-based algorithms comply with global ethical standards and best practices.
5. Social media platforms must deploy technical tools and adequate skilled human resources to detect, prevent and mitigate unlawful content such as fake news, cyberbullying, hate speech and revenge porn on their platforms.
6. Social media platforms must adopt and enforce mechanisms to promptly address complaints, including immediately removing pages, posts and accounts that clearly violate the laws, conventions and guidance cited above. And in today's world, promptly must mean within 12 hours of a determination that such pages contain criminal or inappropriate content.
7. Social media platforms must be transparent as to the algorithms used to persuade users to take certain actions, and should prevent addiction in children, which leads to serious physical and psychological consequences.
8. Social media platforms must adopt and implement technology for verifying the age of children and provide parents with real-time access to their children's accounts. The platforms must also adopt a method for immediately flagging inappropriate content and notifying parents of that content.
9. Social media platforms must use AI and machine learning technology to monitor and detect illegal activity on their platform, such as deepfakes, grooming or cyberbullying.
10. Social media platforms must restrict the amount of time a child—and a child is someone under 18—can spend on a particular social media platform.
11. Social media platforms must immediately suspend the accounts of predators or groomers that target or threaten children.
12. Social media platforms must ensure that information collected from a user to fulfill notice and consent requirements must be prominently displayed to all users—and that means not buried in a five-page Terms of Use, but prominently displayed.
13. Social media platforms must ensure they deploy artificial intelligence and other resources to detect and terminate fake accounts created by bad actors to commit social engineering frauds and other dangerous and inappropriate activity on their platforms.
14. We also, most respectfully, recommend that the UN consider revisions and amendments to its conventions in order to make social media platforms more transparent and to hold them accountable for violations of human rights.
15. We also recommend that national governments consider and pass legislation to address the concerns raised during the hearings before this tribunal. Such legislation should provide for both civil and criminal penalties, depending on the nature of the violation of both national laws and international conventions. Such laws should explicitly prohibit the publication of pornographic material, posts aimed at sexual abuse, and online challenges inciting or encouraging suicide or self-harm, and should protect children by ensuring parental access to the accounts of minors and by requiring parental consent to protect their privacy. Such laws should also require that the identity of users be verified and that age limitations be observed through specific age-gating practices that prohibit underage users from accessing inappropriate and dangerous material.
16. We most respectfully recommend that a court or tribunal with jurisdiction consider the imposition of monetary penalties against Facebook for its conduct in furthering the persecution of the Rohingya in Myanmar. We also recommend that compensation be extended to the victims of this persecution.
17. Similarly, we respectfully also recommend that national courts or tribunals consider compensation to the victims of social media's violations of their human rights.
18. Finally, we most respectfully recommend that this tribunal reconvene one year from now to assess whether the social media platforms have adopted many of these important recommendations and have taken seriously their obligation to protect the human rights of their users.
This constitutes the unanimous oral judgment of this tribunal.
Judges:
• Hon. Shira A. Scheindlin – Appointed by President Bill Clinton as a US Federal Judge (Presiding Judge).
• Herta Däubler-Gmelin – Former German Justice Minister.
• Karnika Seth – Cyber law specialist who has practiced before the Supreme Court of India for the last 23 years.