Last month, Anthropic released Claude’s Constitution, ushering in fresh debates on Constitutional AI and AI ethics. The researchers drew on the Universal Declaration of Human Rights and Apple’s terms of service, among other sources, to ground the model’s safety and ethics values. While AI ethicists have long worked to evaluate and ensure the ethical dimensions of AI systems, a significant gap remains: most frameworks, and the AI principles that companies subsequently adopt, are heavily skewed towards a few dominant Western perspectives. Claude’s Constitution calls this out as a principle, which is encouraging, but there remains under-investment in research that would actually outline what a “non-Western perspective” even means.
Failing to incorporate a non-Western, or more global, perspective into AI ethics and accountability poses several risks. In India, religious chatbots are increasingly condoning violence while speaking in the voice of god. Meanwhile, when Dolly v2 was asked about Islam, it stated, matter-of-factly, that the Quran tells Muslims to fight and subjugate non-Muslims. Especially at the pace at which LLMs like GPT-4 are being released into the world, the consequences can be fatal: outputs like these not only exacerbate ethnic and religious tensions in several parts of the world, they could lead to disproportionate violence against minorities.
Our research examines the existing landscape of AI ethics and accountability frameworks and recommends the key considerations that need to be in place to ensure a non-Western perspective in the training, release, and governance of AI and AGI.