Submission: White House Office of Science and Technology Policy on National AI Priorities


The Biden-Harris Administration is developing a National Artificial Intelligence (AI) Strategy that will chart a path for the United States to harness the benefits and mitigate the risks of AI. This strategy will build on the actions that the U.S. Government has already taken to responsibly advance the development and use of AI. In June 2023, the White House Office of Science and Technology Policy (OSTP) requested public comments to inform this strategy and to update U.S. national priorities and future actions on AI.

Tech Global Institute members and advisors Shahzeb Mahmood, Sabhanaz Rashid Diya, Sheikh Waheed Baksh, Abdullah Hasan Safir, and Theodora Skeadas submitted comments to OSTP in July 2023, outlining considerations for the governance of AI and the role of the U.S. in accelerating international cooperation toward ethical, inclusive, and responsible AI. Specifically, we stress the impact of AI on low- and middle-income countries (the "Global Majority") and propose necessary safeguards, with design and governance as essential levers, to ensure the benefits of AI are equitably distributed. We outline key takeaways below. The full comments are available here.

  1. AI ethics should be grounded in a robust international human rights framework to address the global value alignment gap
    AI governance initiatives branded as "AI ethics" or "responsible AI" are based on the philosophical discipline of ethics. However, ethics is a malleable concept that lacks the universally agreed normative foundation, indivisibility, and enforceability inherent in human rights standards. International human rights law is grounded in a well-calibrated approach that considers the legality, necessity, and proportionality of limitations on rights, and it provides an ecosystem of redress for the public in case of violations, thereby offering a predictable, progressive, and potent benchmark for AI governance.
  2. Transparency is critical to increasing accountability among companies developing AI systems
    We recommend that companies externally share information on data governance (to increase algorithmic transparency), bias and individual harm mitigation, societal concerns, and dangerous capability monitoring. Shared information should be independently audited and updated at a regular frequency.
  3. The design of AI models should be culturally and socially situated
    Common Crawl, the web corpus used to train most AI systems, contains only textual language from the internet, which is predominantly English. AI development should therefore be evaluated not only on the fairness of the technical systems, but also on the environments in which they are developed, the diversity of the AI development teams, and their responsiveness to linguistic diversity and multilinguality.
  4. The U.S. should work with international partners, including multilateral bodies, to establish labor protections for AI workers
    The AI industry is built on the manual, repetitive data labeling and moderation tasks performed by low-skilled, low-wage workers in Global Majority regions, whose collective "ghost work" market value is estimated to reach $13.7 billion by 2030. There is an urgent need for international cooperation to establish labor protections for these low-wage workers, similar to parallel efforts in the ready-made garments and manufacturing industries.
  5. International cooperation is critical to mandate impact and fairness assessments of AI systems
    Before implementing AI-driven projects at scale, particularly outside of the U.S., companies should be required to undertake and publicly share impact and fairness assessments, including inviting comments from stakeholders. This is important because AI systems can have disproportionate negative impacts on underserved communities, specifically ethnic, sexual, and religious minorities, as well as low- and middle-income countries. Post-hoc fixes are unlikely to be effective against deeply rooted biases.
Abdullah Hasan Safir

Community Member

Abdullah Safir is an AI ethics and critical design researcher and PhD candidate at the University of Cambridge.

Theodora Skeadas

Community Member

Sabhanaz Rashid Diya

Founder and Executive Director

Sabhanaz Rashid Diya is the founding board director at Tech Global Institute and Senior Fellow at Centre for International Governance Innovation.