When Conflict Goes Online: How T&S Systems Need to Adapt during Crises in the Global Majority

Barani Maung Maung

Community Member

Every year, the International Crisis Group (ICG) releases a watch list of ten countries or regions at risk of deadly conflict in the year ahead. In 2022, nine of the ten places were in the Global Majority, the countries and regions outside of the North-West. In 2023 it was eight, and only because one of the remaining two entries was a “global economic vulnerability”. This year, the pattern remains unchanged, with nine of the places in the Global Majority. The data is clear: countries and regions in the Global Majority are not only more susceptible to crises; they are also where most of them take place.

These crises spill over into the virtual world, with large social media platforms often serving as the battleground between opposing groups. However, these companies, whose safety architectures are designed around Global North assumptions, often lack the cultural awareness and nuance to manage crises in the Global Majority and the harm that follows.

 

Things Move Fast, and So Does Harm

In crises, things move fast, and so does harm. Current trust and safety systems are built for stable democracies, a condition that is rarely the norm outside of the West. It can take hours, and sometimes even days, before harmful content is assessed against a platform’s guidelines. Social media platforms were criticized for exactly this delay during Ethiopia’s escalating violence in 2021. Moreover, if the harm is novel, the current processes for updating internal guidelines can take weeks. Much of this delay lies in internal decision-making structures. Regional specialists need to involve U.S.-based teams, who lack ground truth and instead rely on English-language international media to gather context, both of which tend to be scarce for hyperlocal crises. By the time a guideline is in effect across the platform, the landscape has shifted completely. Though these lapses may be inconsequential in normal contexts, during crises they can result in tangible harm in the real world. After all, a person accused of being a rebel supporter does not have days to spare under an authoritarian regime.

 

Anonymity = Safety

When violence erupts, public institutions disintegrate, and interpersonal relationships break down, the physical world is no longer safe. This leads people to take great precautions to ensure they are untraceable online. For instance, in response to the Myanmar military coup in 2021, citizens rapidly adopted VPNs to evade surveillance; demand for VPNs in Myanmar increased by 7,200% in the span of three days following the coup.

Other techniques to evade surveillance include creating dummy accounts, which citizens use to take part in and remain engaged with civic discourse while protecting their identities. However, these are also the behaviors that trust and safety systems are trained to flag as suspicious or inauthentic in order to identify bots and cull spam. Definitions and benchmarks of authenticity are rooted in Western-centric perspectives. In the West, an account with a mononym, a fake profile picture (e.g. a generic stock photo), repetitive uploads of content, and a large following may well be a fake account. In the Global Majority, however, many people go by mononyms, and many use pseudonyms and borrowed photos to maintain a degree of anonymity, for fear of repercussions from their wider community or government. And yet these are the accounts that AI classifiers mistakenly shut down, because the technology judges them to be “fake accounts” based on its limited training data and programming.
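To make the mismatch concrete, here is a minimal, purely illustrative sketch of a naive fake-account scorer built on the Western-centric signals described above. Every feature name, threshold, and example account below is a hypothetical assumption for illustration; it does not represent any platform’s actual system.

```python
# Illustrative sketch only: a naive rule-based "inauthenticity" scorer that
# treats Western norms as the baseline. All signals and thresholds here are
# assumptions for illustration, not any real platform's logic.
from dataclasses import dataclass


@dataclass
class Account:
    display_name: str        # e.g. a mononym such as "Thiri"
    uses_stock_photo: bool   # generic or reused profile picture
    repetitive_posts: bool   # many near-duplicate uploads (e.g. protest slogans)
    followers: int


def fake_account_score(acct: Account) -> int:
    """Count how many 'inauthenticity' signals an account trips."""
    score = 0
    if " " not in acct.display_name.strip():  # mononym treated as suspicious
        score += 1
    if acct.uses_stock_photo:                 # anonymity treated as deception
        score += 1
    if acct.repetitive_posts:                 # repeated civic posts look like spam
        score += 1
    if acct.followers > 10_000:               # sudden reach looks bot-like
        score += 1
    return score


# A citizen hiding their identity during a coup trips three of the four
# signals and would be suspended under a hypothetical threshold of 3.
activist = Account("Thiri", uses_stock_photo=True, repetitive_posts=True, followers=800)
print(fake_account_score(activist) >= 3)  # True -> flagged as "fake"
```

Under these assumed rules, the very behaviors that keep a user safe offline are exactly what gets their account removed online.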

Fixing these issues is not simple, because these definitions and frameworks are built into the foundations of the platforms themselves so that they can operate at scale. At their core, these AI trust and safety systems are not designed for flexibility, and they can inadvertently block people from the only safe space for communication during crises in the Global Majority.

 

Vulnerable Groups

When community guidelines are drafted and decided through the lens of Global North education and experience, gaps appear when those same guidelines are applied in the Global Majority. Cursing or sexual accusations targeted at women in the North-West may not carry social ramifications as brutal and far-reaching as they do for women in traditional, patriarchal societies.

In times of crisis, vulnerable groups such as women, ethnic minorities, activists, and journalists are disproportionately targeted, as was illustrated in Sudan and Myanmar. During heightened political moments, platforms may be incentivized to allow sexual attacks against public figures, even those who belong to one or more of these vulnerable groups, in the name of freedom of expression and civic discussion. Though such exceptions may be workable in Western democratic contexts, they can be disastrous, and even fatal, for women in the Global Majority. Particularly during election periods or large-scale protests, mere accusations of promiscuity against female candidates or activists can derail their campaigns. Community guidelines and procedures created by those who lack this cultural knowledge will fail to combat online harm across regions.

 

Is AI the Answer? 

Since human-centered approaches are rarely scalable, platforms have been turning to AI to fill this gap. But as previously mentioned, AI is tricky: it is only as effective as the programming of its developers and the data it is trained on, and when it comes to the latter, quality data for many languages of the Global Majority is lacking. In other words, these languages are low-resourced. AI will struggle to identify and correctly enforce on harmful content in low-resourced languages because it does not understand those languages well. For example, Harvard researcher Mona Elswah characterized AI content moderation in Arabic as “inconsistent moderation”, largely because of limitations in the training data. In times of crisis, these inconsistencies can be especially dangerous.

Consider a scenario in which people are sharing tips, in a low-resourced language, on how to mobilize online campaigns against an oppressive party. To do so, they may start using phrases such as “destroy them” or “annihilate the enemy” to express their dissent. Though these posts are not inherently harmful, AI classifiers may judge them to be and start automatically removing them from the platform. In this scenario, users are blocked from engaging in meaningful political debate, freedom of expression is thwarted, and democratic advancement in the country is ultimately set back. There are real, tangible downstream consequences to AI errors in content moderation decisions in the context of crises.
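The failure mode can be sketched in a few lines. Below is a purely illustrative keyword-matching “violence” filter with no notion of political context, the kind of shortcut a model effectively falls back on when it has too little training data in a language. The phrase list, function name, and example posts are assumptions for illustration, not any platform’s real rules.

```python
# Illustrative sketch only: a context-blind keyword filter. The phrases and
# examples are hypothetical; real systems are more complex, but with scarce
# training data their behavior can degrade toward something this crude.
VIOLENT_PHRASES = {"destroy them", "annihilate the enemy"}


def should_remove(post: str) -> bool:
    """Remove any post containing a listed phrase, regardless of intent."""
    text = post.lower()
    return any(phrase in text for phrase in VIOLENT_PHRASES)


posts = [
    "Join the online campaign and destroy them at the ballot box",   # civic mobilization
    "We will annihilate the enemy's disinformation with facts",      # counter-speech
]
print([should_remove(p) for p in posts])  # [True, True] -> both removed
```

In this assumed setup, both posts are legitimate political speech, yet both are taken down, which is precisely the chilling effect described above.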

 

Moving Forward

To effectively combat harm during international crises, larger T&S organizations must invest in culturally relevant strategies and work continuously with external partners (CSOs, academia, regulators, and democratic governments). Further, it is critical to hire T&S staff who have lived experience in Global Majority contexts and can advise on linguistic and cultural nuances, particularly during crises, when access to external partners may be limited. Organizations are increasingly establishing crisis protocols; these need to account for Global Majority nuances and adopt some degree of flexibility through decentralized decision-making. Only then would the online world remain a virtual safe haven when the offline world poses so many dangers.


Barani Maung Maung is a tech policy and safety expert from Myanmar.