2023 Annual Letter: Elections, AI for the Global Majority and Our Plans

Sabhanaz Rashid Diya

Founder and Executive Director

Last week, I sent out our inaugural Annual Letter to our board, staff and network. We’ve had a busy 2023, but that’s just the beginning.

But before I get to all of that, I want to celebrate our community—staff, fellows, members and advisors.

After a year behind the scenes, Tech Global Institute officially launched in mid-2023, growing to a team of 16 with an additional 70+ community contributors. As a global tech policy think tank, our mission is to advance equity for the Global Majority on the Internet. Specializing in Internet Governance, Trust & Safety and Responsible AI, we lean on our diverse, interdisciplinary brain trust to challenge the assumptions shaping modern Internet rules. We also just launched a Global Majority tech policy fellowship program.

Our network, spanning 26 cities worldwide, comprises technologists, policy specialists, researchers and integrity workers with decades of experience at tech companies in Global Majority contexts, as well as human rights lawyers, grassroots activists and former bureaucrats at the forefront of shaping Internet legislation.

Bringing together Global Majority tech experts to tackle historic inequities is in itself no easy feat, for several reasons.

Our accountability work is deeply intentional, addressing disparities our communities experience firsthand. I am grateful to everyone who believes in our mission. There is a lot at stake, and we are ramping up for the multi-year, multi-stakeholder work ahead.

We’ve just witnessed the first election in 2024. AI repeatedly came up. But there’s more to democracy than dis/misinformation.

Bangladesh, my birth country, held its 12th parliamentary election last weekend, marking the first in a year in which at least 64 countries will hold national elections. The lead-up underscored the crucial role of geopolitics in national polls, seen both online and offline. The U.S., U.K., EU and Canada criticized the election’s fairness, while China, Russia, Japan and India congratulated the incumbent on their fourth term. Public opinion on democracy has never been so starkly polarized.

Our research highlights social media’s role in amplifying polarization, a dynamic we anticipate seeing in other Global Majority elections. Our work on AI-generated disinformation was featured in The Financial Times. We spoke to Thomson Reuters Foundation about common risks emerging across countries.

  • The real danger of AI in elections is not disinformation, but how AI erodes public trust. Politicians shift accountability by attributing uncomfortable truths to AI, leaving journalists and civil society with limited resources to prove otherwise.
  • The narrow focus on true/false information neglects a common threat to election integrity: facts presented in a distorted context. Online campaigns often exploit geopolitical events such as Ukraine and Gaza—tied to strains in U.S. relations with global leaders—contributing to polarization in Global Majority communities.
  • Political ads are a hotly debated topic, but they make up a smaller share of online political engagement, especially in countries facing a severe dollar crisis. Instead, complex networks of actors with undisclosed partisan affiliations spread organic hyper-partisan content.
  • The spotlight on the role of GenAI in elections sidelines a longstanding issue: cheapfakes. Low-cost editing and voice-cloning tools fuel distrust in electoral processes, target minority candidates and discredit human rights defenders. We outlined platform policy inadequacies in addressing a wide range of digitally altered media.

Our fellows and members have spent 7+ months working on a comprehensive, multi-country study that dives into platform accountability in Global Majority elections. We’re planning to launch this in a few weeks.

Tech accountability is complex. Regulations aren’t a silver bullet for everyone, everywhere.

I recently discussed the contagion effect of the EU’s AI Act with France24 alongside MEP Karen Melchior. I emphasized the need to consider the specific political and socio-historical contexts in which laws operate. Transplanting laws across borders without deconstructing and contextualizing their underlying principles poses serious human rights risks. The global rush to pass AI laws is concerning, particularly when India and China, both proposing to criminalize deepfakes, are influencing the Global Majority. We saw similar challenges with prior privacy and content laws, such as Germany’s NetzDG, which 13 countries borrowed to legitimize censorship laws.

Last year, we presented these arguments at the UN General Assembly and the Internet Governance Forum. I spoke at panels at All Tech Is Human’s Responsible Tech Summit and The Royal Society, urging policymakers to avoid a one-size-fits-all approach to regulating AI, a mistake borrowed from how countries approached social media laws. In an op-ed for TechPolicyPress, we similarly expressed concerns about the UN’s multilateral process under the Global Digital Compact.

From Bogota to London to Nairobi to Colombo, we hosted nine Tech Policy Circles worldwide to reimagine tech accountability for the Global Majority. Communities fear that regulations will be abused to surveil and censor unpopular opinions. Our upcoming report will delve into insights from these discussions, offering much-needed nuance on the double-edged sword of transparency and regulatory accountability.

Safety and innovation are interdependent. Building responsible tech needs consistent and intentional investment, keeping historically underserved groups at the center.

In our submissions to the Oversight Board and in policy briefs, we argue that international human rights principles provide a more universally applicable framework for online speech governance. At the same time, inconsistencies persist in defining terms like hate speech, requiring alternate frameworks such as the Rabat Plan of Action to balance speech and safety considerations.

Platform inequities faced by Global Majority communities stem from historical treatment of speech and structural disparities. In submissions to the White House, the Australian and Canadian governments, and during closed-door policy talks with governments in over 20 countries, we pushed for pluralism and multistakeholderism in designing, developing, deploying and governing technologies, including AI. We emphasized inclusion of diverse indigenous, minority and Global Majority voices, as well as human rights impact audits across the product lifecycle, focusing on differential effects on marginalized groups.

Our research finds that under-investment in non-English tools at platforms results in blunt enforcement that undermines human rights, a problem exacerbated during conflicts. We spoke with WIRED about the Oversight Board’s first cases under expedited review, underscoring insufficient platform investment in longstanding conflicts outside the U.S. We emphasized the broader applicability of the Board’s decisions to violent events in Sudan, the Democratic Republic of Congo, Afghanistan, Armenia and Myanmar. Platforms need to implement a consistent and transparent policy framework for conflict response that adheres to both international human rights and humanitarian law.

Global Majority policymakers are rightfully excited about the potential of the Internet and AI to tackle health, agriculture and economic challenges. We share their optimism but argue that good innovation is underpinned by good governance. In 2024, we plan to continue working with policymakers, civil society and technologists so they deliberately incorporate consumer protection, privacy safeguards and human rights, ensuring solutions can benefit underserved groups.

We are at the cusp of a tipping point in determining how technology will shape society. This means we have our work cut out for us. And I cannot wait to see what we build together.

Sabhanaz Rashid Diya

Founder and Executive Director

Sabhanaz Rashid Diya is the founding board director of Tech Global Institute and a Senior Fellow at the Centre for International Governance Innovation.