Submission: Oversight Board on Violence in the Indian State of Odisha
Earlier in August, the Oversight Board announced a case on communal violence in the Indian State of Odisha. The case concerns a video posted on Facebook in April 2023 depicting a procession of saffron-colored flags followed by stone pelting. The video's caption reads "Sambalpur," a town in Odisha where communal violence broke out between Hindus and Muslims during a religious festival. The clashes were followed by arrests, a curfew and the suspension of internet services in Odisha.
Although Meta identified and removed the video under its Violence and Incitement policy, citing an ongoing risk of communal violence between Hindus and Muslims, it referred the case to the Board because of the tension between Meta's values of "Voice" and "Safety," and because of the context required to fully assess the risk of harm posed by the video. Meta asked the Board to assess whether its decision to remove the content represents an appropriate balancing of Facebook's values of "Privacy," "Safety," "Dignity," and "Voice," and whether it is consistent with international human rights standards.
Tech Global Institute members and advisors Shahzeb Mahmood, Ujval Mohan and Sabhanaz Rashid Diya responded to the Board's request for public comments in support of Meta's decision to remove the content. In the submission, we provide further input on the socio-political backdrop to the treatment of ethnic and religious groups in South Asia, how content posted in India can have adverse spillover effects across South Asia, how Meta's Violence and Incitement policy should treat content depicting scenes of communal violence, and how to assess whether such content could contribute to offline violence.
We outline key takeaways below, in which we argue that social media platforms need to fundamentally re-evaluate how they weigh competing values to ensure safety for underrepresented groups. The full comments are available here.
Impact is more important than intent: Platform policies are typically grounded in the perceived intent behind sharing a piece of content, which often misses critical nuances of the political, sociocultural and economic contexts within which it is distributed. As a result, even a seemingly straightforward piece of content can contribute to offline harm, particularly among historically marginalized groups. This is especially salient when assessing whether content is shared as praise versus in neutral or condemning contexts. Platforms may look at impact in some instances; however, this is a more passive approach.
While there are free speech principles to consider, we argue that deliberately assessing content through an impact-risk framework can more accurately capture the risk it poses. With reference to this case, we assess that a video showing stone pelting between two highly polarized religious groups could lead to further violence on the ground, irrespective of the intent with which it is shared.
Differential thresholds for violence based on power dynamics: Should the threshold for what constitutes violence be applied uniformly across all communities within a society? We argue that, especially when dealing with edge cases and escalations, differential thresholds should be applied to violence against majority versus minority groups within the constructs of caste, gender identity, ethnicity, race, religion and other sociopolitical characteristics, as this would appropriately capture the power dynamics and the subsequent offline impact of online speech.
In this case, stone pelting at an alleged Hindu procession in India would be considered violence, given the history of inter-religious clashes and stone pelting in the region. Within the same context, if it were a Muslim procession (Muslims are a minority group in India), a much less severe action (e.g., loudly playing anti-Islamic sermons) should also be treated as violent, as it would be perceived as dehumanizing to Muslims and would likely pose similar risks of inter-religious clashes.
Treat protected characteristics through the lens of the marginalized group: Meta describes protected characteristics as identifiable personal characteristics such as gender identity, sexual identity, religious affiliation, caste and national origin. Hate speech is defined as an attack on people on the basis of their protected characteristics; however, the definition falls short of accounting for historical inequities that exist within the binary construct of a protected characteristic. In other words, should an attack on a man be treated the same way as an attack on a woman? Should the same thresholds be applied to both groups without considering the political, social, economic, environmental, linguistic and normative inequities within a specific context that could result in greater negative impact on the disadvantaged group?
We argue that when assessing an attack on a protected characteristic, higher weight (i.e., more safeguards) should be given to the historically disadvantaged group within that binary construct. In reference to the Violence and Incitement policy applied in this case, we propose re-evaluating the definition and application of protected characteristics to ensure adequate safeguards are provided to different groups when balancing voice and safety.
Treat attacks on concepts on par with attacks on protected characteristics: Platform policies treat safety through the lens of an identifiable target, i.e., a person or group of persons. While there are First Amendment arguments to support this approach, in most parts of the world, particularly the Global South, hate and violence are spread by attacking a concept or institution rather than persons. In reference to this case, inter-religious and ethnic violence often breaks out in India, Bangladesh, Pakistan and Sri Lanka, among others, when a particular group shares allegedly direct and/or implied disrespectful or offensive "remarks" about another group's values, practices and beliefs. Examples include a burnt Quran (interpreted as an attack on Islam) or a head covering placed on a Hindu deity (interpreted as an attack on Hinduism and/or Islam), which by extension will be seen as an attack on Muslims and/or Hindus and could lead to retaliatory protests and violence.
We argue that platform policies should be expanded to capture nuanced attacks on concepts, institutions and practices with the same weight as attacks on persons, given the risks of offline communal violence.
Founder and Executive Director
Sabhanaz Rashid Diya is the founding board director at Tech Global Institute and a Senior Fellow at the Centre for International Governance Innovation.