Reforming AI Laws and Regulation in Bangladesh: Current Harms and Possible Future(s)

Salwa Hoque

Community Member

Bangladesh is among many countries thinking about how to introduce AI laws and regulation to address and accommodate emerging technologies. From urban to rural spaces, most of us engage with AI to some extent, be it when using our smartphones, searching for content on the internet, or engaging with social media platforms. Most families have at least one member (and usually more) who has access to such technologies, and it is worthwhile to think together about the laws and regulation that should govern AI systems.

The Bangladesh Information and Communication Technology (ICT) Division has developed a draft of the National Artificial Intelligence Policy 2024. This draft states that its implementation is to ensure “the ethical application of AI as we move towards achieving a Smart Bangladesh by 2041,” and that it is intended to support the nation’s economic growth, societal progress, and national security. The AI Policy draft justifies and promotes data-driven policy, but it is worth asking: does data-driven policy lead to the well-being of society? There is a prevalent assumption in society that outputs provided by AI are “better” than those of human beings because they are 1) exhaustive (AI can parse through volumes of data quickly in ways that human beings cannot) and 2) neutral (AI is not prejudiced like human beings are, that is, its outputs are not based on political, economic, or social incentives). While algorithms might not be biased in the same ways as human beings, they are embedded with particular values that are encoded by the designers of technology. There are many shortcomings of using AI systems for decision-making that we must discuss collaboratively to ensure our people are protected from the harms AI systems can cause (even if unintentionally).

In this article, I highlight the shortcomings of the current AI Policy draft. Even if this particular draft is revoked, the points made in this piece can contribute to thinking about the regulation of AI more broadly. I have been studying multiple drafts of the Data Protection Act since 2021 and have addressed many of the following concerns regarding AI systems in public seminars and internal discussions with local scholars, researchers in NGOs, policymakers, and journalists over the years. This piece is an organized collection of concerns about leaving AI systems unchecked.

If the AI Policy gets enacted, it will be an addition to the laws already in place that regulate digital media and technology, such as the Digital Security Act (DSA) and the Information and Communication Technology (ICT) Act. Recently, the DSA was replaced with the Cyber Security Act (CSA). The regulation of technology is meant to preserve the safety, privacy, and rights of the public and ensure that people are not exploited, discriminated against, or harmed by any individual, corporation, or the state. Yet, Bangladeshi journalists, minority rights activists, lawyers, feminists, human rights groups, and academics have drawn attention to how digital laws are misused by the state as well as by elite and powerful members of society to intimidate and silence the public, particularly those marginalized in society.

In August 2024, the former prime minister Sheikh Hasina was ousted from Bangladesh and the parliament was dissolved. This launched Bangladesh into a new era, with an interim government in place until an election is held for the next government. This is an opportunity to reform the existing digital laws that aided authoritarian rule and to rethink the draft laws before they are implemented in society. For instance, the Forum for Freedom of Expression Bangladesh (FExB) has already requested that the interim government repeal the Cyber Security Act, as its enactment is questionable. This law, instead of helping people, was exploited by the state to crack down on those who criticized the government.

This piece is a call to the interim (and next) government of Bangladesh as well as to the public(s) of this region to be aware of the limitations of the current AI Policy draft. The goal is to foster dialogue and collectively work towards making laws that center on the well-being and protection of the people on the ground, taking care to include marginalized and minority communities as well.

 

 

Examples of Discriminatory AI

The AI Policy draft claims that “AI systems will improve public service efficiency, ensure personalized service delivery, enhancing citizen-friendly services through automation and predictive processes” (Section 4.1). While the draft law focuses on how automation and predictive processes can be used, there is no mention of protecting individuals and communities when AI produces decisions that can harm them. Currently, AI systems are incorporated into various forms of decision-making support. While most decisions are not yet fully automated, these systems still play a role in the initial scanning, filtering, and assessment of various cases. Far from being neutral, AI systems can reinforce dominant biases.

Here is an example from the U.S. of harm that can be prevented in Bangladesh. Amazon shut down its artificial intelligence (AI) recruiting tool after it was found to discriminate against women. This is because the training data used to develop the model consisted of the CVs that had been sent to the company over the previous ten years. Due to societal and gendered hierarchies, most of the job applicants in past years were men, resulting in automated decisions that prioritized men over women. Employers using such hiring software were handed a shortlist with few or no women to choose from. In North America, AI systems often also reinscribe racial biases and discriminate against people of color, those with disabilities, and immigrants (see Wendy Chun’s Discriminating Data and Kate Crawford’s Atlas of AI). In other words, the results that AI generates are not neutral, as they can reinforce preexisting offline biases.
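
To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn on invented data, not Amazon’s actual system or data): a screening model trained on historically skewed shortlisting decisions learns to penalize words associated with women, even though gender is never an explicit input.

```python
# Minimal, hypothetical sketch with synthetic data (not Amazon's system or data):
# a screening model trained on historically skewed shortlisting decisions learns
# to penalize tokens associated with women, even though gender is never a field.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy CV snippets and historical "shortlisted" labels (1 = yes, 0 = no),
# mirroring a past in which mostly men were shortlisted.
cvs = [
    "captain of men's chess club, software engineer",
    "men's debate team, data analyst",
    "software engineer, cricket team captain",
    "women's chess club captain, software engineer",
    "women's coding society lead, data analyst",
    "data analyst, hockey team",
]
shortlisted = [1, 1, 1, 0, 0, 1]

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, shortlisted)

# The token "women" ends up with a negative weight, so any new CV containing it
# is scored lower regardless of the applicant's actual qualifications.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print("weight for 'women':", round(weights["women"], 3))
print("weight for 'men':", round(weights["men"], 3))
```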

I use the example of recruiting since the AI Policy draft can support AI hiring systems; Section 4.8.5 states, “AI systems will facilitate job matching and optimize employment opportunities through advanced algorithms and data analytics.” These laws are often written in vague and ambiguous ways, which leaves scope for employers to use such technology while the people being discriminated against are not protected. In Bangladesh (as in most parts of the world), women still have a harder time getting employed than men. Using AI in this way will only widen the gap. Models can pick up patterns in names (from primarily dominant groups), affecting which CVs get shortlisted. In the Bangladesh context, this can make it more difficult for non-Muslim minorities (with non-Muslim names) to be selected by such models. Hence, our AI laws should incorporate ways to combat these kinds of discrimination, which are prone to harm communities already living on the margins.
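
The proxy problem can also be sketched in a few lines, again with entirely invented names, labels, and outcomes: even when religion or ethnicity is never a column in the data, character patterns in names can carry that information into the model.

```python
# Hypothetical sketch with entirely invented names and labels: even when religion
# or ethnicity is never an explicit column, character patterns in names can act
# as a proxy for group membership and carry historical bias into the model.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

past = pd.DataFrame({
    "name": ["Rahim Uddin", "Karim Ahmed", "Nusrat Jahan",
             "Shuvro Das", "Ananya Chakma", "Ripon Tripura"],
    # Historical shortlisting skewed toward the majority group.
    "shortlisted": [1, 1, 1, 0, 0, 0],
})

# Character n-grams of names become features the model can latch onto.
vec = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3))
X = vec.fit_transform(past["name"])
model = LogisticRegression().fit(X, past["shortlisted"])

# New applicants are scored on name patterns alone, not job-relevant signals.
new_names = ["Jannatul Ferdous", "Pritom Das"]
scores = model.predict_proba(vec.transform(new_names))[:, 1]
print(dict(zip(new_names, scores.round(3))))
```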

Another example that is particularly relevant for the Bangladesh context is using AI for marriage/divorce cases. Nowadays, legal and tech-company actors claim that technologies can assist lawyers and judges in various ways, such as determining the appropriate compensation amount for a case using the data in the system. For example, machine learning models can be used to predict a couple’s divorce alimony, as explored by Fabrice Muhlenbach and his research team in the context of France. This would pose several problems in Bangladesh, and in South Asia more broadly. First, there are many marriages and divorces (particularly in rural Bangladesh) that are not registered with the state. In these cases, AI systems have no record of the marriages; it is as though non-digitized marriages do not exist. There are many marginalized and indigenous communities in Bangladesh where couples undergo customary marriages and do not formally register their marriage with the state. In addition, there are many barriers that prevent minority groups such as Rohingya refugees from registering their births, deaths, marriages, and divorces with the Bangladesh state. If AI systems are used for marriage/divorce purposes, they would rely on the digitized data available, such as existing marriage registrations, which would exclude many minority and indigenous communities and their practices.
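
A small hypothetical sketch (in Python with pandas, using made-up records) shows how this exclusion happens mechanically: any pipeline that matches divorce cases against a digitized marriage registry silently drops couples whose marriages were never registered, before a model ever sees them.

```python
# Hypothetical sketch with made-up records: a pipeline that matches divorce cases
# against a digitized marriage registry silently drops couples whose marriages
# were never registered with the state, before any model is even trained.
import pandas as pd

registry = pd.DataFrame({            # digitized state marriage registry
    "couple_id": ["C1", "C2"],
    "registered_year": [2018, 2020],
})

divorce_cases = pd.DataFrame({       # cases arriving for alimony/maintenance assessment
    "couple_id": ["C1", "C2", "C3", "C4"],   # C3, C4: customary/unregistered marriages
    "claimed_maintenance": [50000, 30000, 40000, 25000],
})

# An inner join keeps only the couples the registry "knows about".
model_input = divorce_cases.merge(registry, on="couple_id", how="inner")
print(model_input)
print("excluded cases:",
      sorted(set(divorce_cases["couple_id"]) - set(model_input["couple_id"])))
```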

Second, divorce alimony and maintenance money for Muslim couples is complex. For example, the Supreme Court of Bangladesh has used Surah Al Baqarah (The Cow), Ayat 241, in various ways to grant women their right to maintenance. There is no “fixed amount” mentioned in this verse, and the legal interpretation and flexibility of the court is needed. This flexibility provides a chance for women, particularly those from marginalized communities, to receive compensation and their right to maintenance when social norms and structures of power are stacked against them. This flexibility and form of legal interpretation cannot easily be replicated by an AI model.

Third, Muslim marriages in Bangladesh involve den mehr, the money that the husband is obligated to pay his wife upon marriage. In practice, husbands often do not pay this money during the lifetime of a marriage, leaving the entire or partial amount as “baki” in the kabinnamma/nikkahnama. In the case of a divorce, the husband is obligated to pay the “baki” in full. These kinds of cases involve complex negotiations on the ground. My ethnographic research suggests that women from rural Bangladesh often use the “baki” amount to negotiate with their spouses and in-laws during divorce. In a patriarchal society, this leeway provides a small opportunity for women to try to meet at least a few of their demands. There are many instances where the husband and/or his family state in the kabinnamma that the full den mehr amount has been paid when it has not. Women in rural Bangladesh rely on the family’s and community’s memory to help them get the den mehr money in these circumstances. AI models rely on structured data, which can be codified. But as we know, the reality on the ground is not as structured. These complex situations, rooted in gendered and dominant power structures, require context and nuance, which AI systems lack. Using these systems for marriage and divorce has the potential to discriminate against women, whereas human beings handling these cases can exercise judgment by contextualizing the situation. It is important to start thinking about these issues before we introduce laws that allow AI to help make decisions in these sectors of life.

 

 

Using AI in the Judiciary

There is a prevalent assumption that AI will provide a fairer legal system. The AI Policy draft states that AI will be used to help the judiciary in several ways, such as “case processing, tracking, scheduling, legal research, document analysis, prediction of case outcomes, transcription, translation of proceedings, and providing legal recommendations to assist the court” (Section 4.1.5). There are several problems with using AI in law in the manners suggested. I draw my opinion from my own research on AI and law. My doctoral dissertation, Digitizing Law: Legal Pluralism and Data-Driven Justice, examined the shortcomings of using AI in law in the Bangladesh context, and here are some findings from that research.

A potential concern is using AI for translation. Translation is a complex process that requires context and perspective to decode meaning. In law, statutes, acts, and testimonies can often require interpretation and flexibility instead of “direct” translation. In 2021, the Bangladesh Supreme Court announced it would use software made by the Ek Step Foundation of India – Amar Vasha – to translate judgements and court orders from English to Bangla. This sounds great in theory but poses many problems on the ground. For example, I conducted research on how consensual elopement cases are sometimes called “uthai nawa” or “tule nawa” in the lower courts. There are many cases where a woman testifies that a man “took” her and forced her to marry him – “O amake uthai nise/tule nise.” This language of coercion can imply that a man has kidnapped or abducted her against her will. Yet, a closer ethnographic approach to many of these cases reveals that there are several reasons why a woman who consensually eloped with her partner might use such language of coercion. Due to the asymmetrical gendered power structure in Bangladesh, the language of coercion can help women mitigate social ostracization from the community, as a woman of good character is expected not to engage in romantic/sexual relationships, let alone elope with a partner. Sometimes, women’s parents or guardians file abduction charges to force the woman to come back to the family and abandon her partner, even when she claims otherwise.

Either way, using automated translation tools to interpret a woman’s testimony is fraught: local knowledge and context are required to parse meanings that are situated within a cultural setting, with its expected social norms of what women can or cannot say. Google Translate’s result for “O amake uthai nise” is “He picked me up,” which does not reflect the complexity of such testimonies. Moreover, the usage of uthai nawa/tule nawa is more common in East Bengal than in West Bengal. Bangla–English AI translation tools developed in India are grounded in the dialect spoken in West Bengal, not in East Bengal, where Bangladesh is located. This can lead to discriminatory automated results as well. For example, Dipto Das et al. examine Bengali sentiment analysis to show how computational tools such as Natural Language Processing (machine learning that engages with human language) can be biased and replicate the worldviews of designers, particularly those from dominant groups.
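
A minimal, hypothetical sketch (in Python with scikit-learn, using a tiny invented corpus rather than any real translation or sentiment system) shows one way dialect mismatch surfaces technically: words that never appeared in the training dialect simply fall outside the model’s vocabulary, so whatever meaning they carried is lost.

```python
# Hypothetical sketch with a tiny invented corpus (not any real translation or
# sentiment system): a vocabulary built from one dialect treats words from another
# dialect as out-of-vocabulary, so the meaning they carried is simply dropped.
from sklearn.feature_extraction.text import CountVectorizer

standard_corpus = [              # romanized text in the dialect the tool "knows"
    "se amake niye geche",       # roughly: "he took me away" (standard phrasing)
    "take apohoron kora hoyeche",
]
vec = CountVectorizer().fit(standard_corpus)

testimony = ["o amake uthai nise"]   # lower-court testimony in a local dialect
X = vec.transform(testimony)

print(sorted(vec.vocabulary_))   # "uthai" and "nise" never appear in the vocabulary
print(X.toarray())               # the testimony maps to a nearly all-zero vector
```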

The AI Policy also states that AI might be used for the “prediction of case outcomes” and for “providing legal recommendations to assist the court.” This means that using AI to help determine the verdict of a case is allowed (and encouraged) within the parameters of this draft law. It is important to question whether an AI model can accurately determine the verdict of a case. An AI model is only as good as the datasets it is trained on. This means that if the digital data stored in databases are not diverse and inclusive, then AI systems will learn patterns from skewed data, resulting in flawed outputs. Despite the Digital Bangladesh movement, most of our data are not digitized, and so our databases are not comprehensive. I have written a peer-reviewed article – Neocolonial Digitality: Analyzing Digital Legal Databases Using Legal Pluralism – on how digital legal databases of Bangladesh include only a handful of cases from the Supreme Court of Bangladesh, noting the biased decisions machine learning models would provide if trained on these biased and limited databases. My article demonstrates how AI judges would provide suggestions that discriminate against women in consensual elopement cases. Human lives are complex and cannot be datafied so easily. It is important for us not to fall into the trap of the current AI hype and hastily introduce AI systems in courtroom practices without considering the potential inequality and unfairness they can cause.
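
To illustrate the point (with invented counts, not actual database figures), here is a hypothetical Python sketch: a naive predictor built from a narrow, skewed set of digitized judgements simply reproduces the dominant pattern in that set, whatever the new case actually looks like.

```python
# Hypothetical sketch with invented counts (not actual database figures): a naive
# predictor built from a narrow, skewed set of digitized judgements reproduces the
# dominant pattern in that set, regardless of what the new case involves.
import pandas as pd

digitized = pd.DataFrame({
    "court":   ["Supreme Court"] * 45 + ["District Court"] * 5,
    "outcome": ["relief_granted"] * 45 + ["relief_denied"] * 5,
})

print(digitized.groupby("court").size())     # coverage check: no lower courts at all
majority_outcome = digitized["outcome"].mode()[0]

def predict_outcome(new_case: dict) -> str:
    # A baseline "AI judge" trained on this data can only echo the majority label.
    return majority_outcome

print(predict_outcome({"court": "Magistrate court", "issue": "consensual elopement"}))
```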

For AI to predict case outcomes, models have to be trained on case judgements, and so we must look at where the digital legal databases are located. Currently, the most popular digital legal databases for Bangladesh court judgements are in legal research software, e.g., Manupatra/BdLex (which is based in India). The very few selected Bangladesh cases are almost all in English rather than Bangla. It is unclear who selects which cases get digitized, but Bangladeshis do not play a significant role in deciding which of their cases make it to the online archives in legal research software (which is mostly developed in other countries). In other words, if predictive algorithmic models are developed using the digital court records available online, they will be limited by the narrow set of records online and by the lack of Bangladeshi field experts (in law, the tech industry, etc.) involved in developing these technologies.

Moreover, the current AI Policy draft does not protect Bangladesh citizens from their data being collected and used by companies from other nations. Since we import so many technologies from elsewhere, the people of Bangladesh need laws to protect their data from being misused by local and global private companies and states.

 

 

Importing “Foreign” AI Systems: Problematizing Transparency

The AI Policy emphasizes multiple times that transparency is important, particularly in terms of the collection, storage, and usage of data. Section 3.2 states: “Ensuring transparency and lines of accountability in the collection, storage, and usage of data in AI systems to ensure that the decision-making process is explainable and interpretable, allowing users and stakeholders to understand how AI arrives at decisions, and can challenge them.” This is a commendable goal, but there are several problems that can prevent transparency despite these good intentions.

First, private companies can invoke privacy and copyright protections that prevent regulators from overseeing their AI systems. Even if external regulators and data auditors are permitted for Bangladeshi companies, the reality is that most of the AI systems used in Bangladesh come from companies abroad. For example, Bangladesh citizens typically use U.S. products such as Google (for email and internet search), Microsoft (for office-related work), and social media platforms such as WhatsApp and Facebook, both owned by Meta. The China-based platform TikTok is a growing and popular app in Bangladesh, with over 37 million users in the region. Most of Bangladesh citizens’ data are stored in data centers in other countries, not in Bangladesh. This is because the foreign companies that dominate the Bangladesh market are Google Cloud, Microsoft Azure, IBM Cloud, Oracle, and so on, and they have power and control over Bangladeshi users’ data.

While seeking transparency has benefits, there are many other variables that lead AI to produce “random” outputs that are not endorsed by the programmers themselves. This is because AI can be unpredictable. In the book chapter “Knowing Algorithms,” Science and Technology Studies scholar Nick Seaver suggests that even if we had insider knowledge and access to how code is written, code is complex and is written and updated by many hands over time; this makes it difficult to pinpoint what caused the “random” output and makes code hard to “fix.” That is why advocating for transparency and auditing code might still be limiting in many ways. That is not to say that institutions developing, promoting, and implementing these tools should not be held accountable for the harms generated by their products. Similar to the EU’s AI Act, it is important for Bangladesh’s AI Policy to hold accountable the producers of AI technologies as well as those implementing these technologies in their relevant fields.

 

 

AI and Environmental Costs

A common narrative that tech companies promote is that AI helps to reduce the damage from global warming and climate change. The cloud is often used as an example of this. When we store data in the cloud, where does the data go? As Tung-Hui Hu points out in the book A Prehistory of the Cloud, our data does not magically disappear into an abstract place in the cloud; rather, it is situated physically in data centers. Far from being environmentally friendly, these data centers consume huge amounts of electricity and water and take up vast amounts of land. Karen Hao’s investigation examines how Microsoft promotes the rhetoric that AI can be used for climate innovation while, at the same time, providing technology that supports the fossil-fuel industry. In other words, developing and using AI technologies can have environmental costs in various ways that we should pay attention to.

Potential laws we can study as we develop our own AI laws to protect the environment are the Artificial Intelligence Environmental Impacts Act of 2024 and the EU’s AI Act. While these laws are points of reference for us to study, it is important for the people of Bangladesh to consider their own context and include minority and indigenous people’s knowledge of climate change, and the strategies developed by local communities, within our laws. Collaboration between indigenous communities and minority groups as well as the academics, NGOs, and technology specialists who have been working on these issues in Bangladesh is key.

 

 

Automated Misinformation/Disinformation

During the student protests, as well as in the days after Sheikh Hasina’s departure from Bangladesh, social media platforms were flooded with various forms of propaganda. During this time of chaos and uncertainty, a few groups within Bangladesh inflicted violence on minority communities. This led many right-wing Indian supporters to produce and/or disseminate fake images and videos of what was happening in the country. Manipulated photos and videos, as well as digital media from past incidents, were also disseminated by various political parties for their own agendas, to cause confusion within Bangladesh and among the international crowd watching. While misinformation/disinformation is not a new occurrence, generative AI can lead to new forms of precarity.

With the rise and accessibility of generative AI, it has become easier to manipulate digital photos and videos, heightening the danger of propagating misinformation/disinformation. A prominent danger we face today relates to deepfakes, which refer to “media generated and manipulated by AI,” as defined by Karen Hao (Hao 2021). Hao’s article “A horrifying new AI app swaps women into porn videos with a click” explains how easy it is for anyone to create deepfakes, providing new means of revenge porn that put women in precarious positions. Many Bangladesh NGO reports have explored how private photos of women are leaked online by ex-partners, leading to online and offline bullying, which can result in women committing suicide. For example, in a paper published by the BRAC Institute of Governance and Development (BIGD), Mahpara et al. explore existing practices of fabricating women’s photos into pornographic images. With deepfakes, it is now possible to generate revenge porn videos even outside of intimate relationships. Deepfake videos can be made using photos gathered from social media accounts, and images of only the face are enough to create such videos. These videos can appear as though they are “real,” making it easier to blackmail women and dishonor them publicly.

Good deepfake detectors are not widely available, and the scandal of having such videos available publicly or circulating via semi-private channels like WhatsApp is harmful for women, particularly in South Asia. Even if deepfakes are eventually found to be fake in courtrooms, Bangladesh, and South Asia more broadly, has histories of mob violence that can erupt spontaneously. Deepfakes involving women as well as queer communities could lead to harm and even death (be it homicide or suicide) before a court can prove that the video is fake. Bangladesh’s AI laws must work towards protecting people from such harm and hold the relevant parties accountable for such actions. As AI is being integrated into all aspects of life, my hope is to foster dialogue on how we can approach Bangladesh’s law and policy to protect the people in Bangladesh and make sure we have the (legal) tools and social awareness to ensure that our minority and marginalized communities are safe from automated misinformation/disinformation.

 

 

Moving Forward: Recommended Amendments

Thinking about how to protect people from discriminatory AI systems is a conversation that many nations are having. Countries worldwide concur that it is vital to ensure people are protected when AI systems fail or when they result in harm to individuals and communities. A law that people often refer to is Article 22 of the EU’s General Data Protection Regulation (GDPR), as it limits autonomous decision-making: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects.” This provision was used in a recent case in Europe. Drivers working for Uber and Ola demanded the right to access their data and sought justice when AI determined they should be fired. A Dutch court of appeals ruled primarily in favor of the gig workers by employing GDPR Article 22.

Despite this law being revolutionary in its time, there is still room for improvement. For example, it protects people from decisions made “solely on automated processing,” but what about decisions made partly by automation and partly by humans? There is a loophole that can excuse companies, as they can shield themselves with the justification that decisions were made using both automation and human beings. In our AI Policy, we have the scope to address this gap. GDPR Article 22 is also limiting in that it focuses on automated decisions that have a “legal effect;” discriminatory practices that do not result in legal effects demand attention as well. If Bangladesh’s AI laws and regulation can address these gaps to protect people from automated decision-making, we can not only protect the people in our nation but also provide a roadmap for other nations to follow.

Another good place for us to study as we develop our AI laws is the recent EU Artificial Intelligence Act, adopted in 2024. Article 5, Prohibited Artificial Intelligence Practices, aims to shield people from discriminatory AI practices and can protect them from algorithmic biases in many ways. For example, in relation to the discriminatory hiring/firing processes mentioned previously, the EU AI Act includes the following safeguard:

“(b) the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm”

This law also protects people from having their images taken without consent to train Facial Recognition Technology (FRT), as well as from automated predictive policing and mass surveillance, which ample research has shown discriminate against minority and marginalized communities within society.

Note, this is by no means advocating that we follow and adopt “Western” laws wholesale. We are still paying the price for the colonial laws that are rooted in our current legal system. While AI laws from elsewhere can help us locate potential problems and note how other nations are thinking about protecting people from emerging algorithmic technologies, there are issues that AI systems can create particularly in the Bangladesh context that need attention. The marriage/divorce examples mentioned previously highlight why we need laws to regulate AI that cater to the experiences and culture of our people. This is a call for people from all walks of Bangladeshi life to collectively think about how these systems might impact us, our loved ones, and our communities.

 

 

Final Thoughts

Like many countries, Bangladesh has a digital vision that focuses on enhancing robotics and automation. We should start thinking about these issues now, since once laws are enacted it is difficult to amend them. We only updated the Evidence Act, 1872 to include digital evidence in 2022. We should plan ahead and ensure as much protection for people as we can today.

I am not a technophobe who shuns emerging technologies. I acknowledge that AI can be used in meaningful ways, for instance in healthcare. For example, cancer researchers use predictive modeling techniques to analyze data from medical databases, identifying patterns and alterations that can help detect early-stage cancer and develop drugs. That said, scholars of HCI, STS, and media studies have noted that these databases can contain little or skewed data on people of color, which can produce flawed results for such groups. I do not denounce AI and other related technology, but rather think it is important to point out the flaws in how they are designed and used.

I write this article for three main reasons. First, I want to demystify AI and help explain the flaws of using these systems from a socio-technical perspective. My goal is to discuss the harms AI systems (can) have in society if they are left unchecked in our laws. Second, I aim to foster conversations and dialogue with the various communities in Bangladesh (journalists, feminist and human rights activists, lawyers, academics, entrepreneurs, public officials, artists, computer scientists, engineers, the hijra community, Muslim and non-Muslim minority groups, indigenous groups, Rohingya refugees, and so on) so that we can share perspective(s) and learn from each other. It is important to collaborate and collectively think about how to incorporate AI in society for public interest while also mitigating its harms. Third, this is a call to the interim government and the next government of Bangladesh to address these shortcomings and support the regulation of AI and other digital laws in ways that center on the well-being of the people of the nation and protect them from dominant forms of power – be it local or global.


Dr Salwa Hoque is a postdoctoral research fellow at the AIAI Network at Emory University and a visiting fellow at the Information Society Project at Yale Law School.