We work at the intersection of freedom of expression and online safety, safeguarding vulnerable groups in low- and middle-income regions.
Marginalized and underrepresented communities often face disproportionate harm on internet platforms, including online harassment, hate speech, and targeted misinformation. Content moderation policies can unintentionally amplify these challenges by failing to adequately address systemic biases, cultural nuances, and contextual understanding. Given the enormous volume of content on social media platforms, reviewing each piece manually is practically impossible, so platforms over-rely on automation and algorithms, which struggle with nuance and lead to both over- and under-enforcement. Further, the policies themselves favor dominant voices and fail to address region- or culture-specific challenges, and the diverse regulatory landscape, with varying approaches across jurisdictions, adds to the complexity. There is an urgent need to balance mitigating online harms with safeguarding freedom of expression, which will require widespread and meaningful collaboration, particularly by putting underrepresented voices in the driver's seat.
Starting in August, we will release a biweekly newsletter rounding up what's happening in tech and the Internet in the Global South, why it matters, and what you or we can do about it. We'll also post about opportunities to collaborate or participate in policy consultations. If you're interested, sign up to join the waitlist.