Submission: Government of Australia on Safe and Responsible AI

By Abdullah Hasan Safir, Shahzeb Mahmood, and Sabhanaz Rashid Diya

The Department of Industry, Science and Resources in Australia published a discussion paper in June 2023 proposing safeguards and additional measures to build public trust and confidence in AI technologies and systems. While Australia already has some safeguards in place for AI, and responses to AI remain at an early stage globally, it is not alone in weighing whether further regulatory and governance mechanisms are required to mitigate emerging risks. The paper highlighted that Australia’s ability to take advantage of AI supplied globally, and to support the growth of AI domestically, will be shaped by the extent to which its responses are consistent with those elsewhere in the world. Consultations on the discussion paper sought to help ensure Australia continues to support responsible AI practices that increase community trust and confidence. The paper built on the recent rapid research report on generative AI delivered by the government’s National Science and Technology Council (NSTC).

Tech Global Institute members and advisors Shahzeb Mahmood, Sabhanaz Rashid Diya, and Abdullah Hasan Safir submitted comments to the Department of Industry, Science and Resources of Australia, emphasizing the role of human rights principles in establishing guardrails for responsible AI. We argue that strengthening the existing laws on competition, consumer rights, and data protection, complemented by a new AI regulation, can ensure the regulatory framework is future-proof, technology-neutral, and fit-for-purpose, and can address foreseeable harms associated with new technologies.

We outline key takeaways below. Our full comments are available here.

Existing legal constructs in Australia should be retained and applied to new technologies, but as those constructs meet their limitations, regulatory instrumentalism is necessary to align new technologies, such as advanced AI and automated decision-making (ADM), with societal values. Laws will have to be proactive and adopt a novel approach, combining data protection, consumer rights, competition, anti-discrimination, and online safety legislation within the broader architecture of a human-centric and rights-focused AI regulation.

The Australian Consumer Law (ACL) offers a basis for strengthening consumer rights when using AI technologies; however, the law mostly responds to harms after they have occurred. Effective consumer protection in the digital ecosystem warrants a legal architecture that proactively places normative limits on conduct that is foreseeably harmful. The ACL should be amended to specifically incorporate foreseeable harms to consumers arising from AI and ADM, including by mandating algorithmic transparency, enabling correction of inaccuracies in training datasets to avoid bias and discrimination, restricting manipulative advertising and exploitative micro-targeting, providing a right to explanation of automated decisions and human review of such decisions, and protecting vulnerable consumers such as children, elderly individuals, and individuals with disabilities.

Human rights audits and due diligence assessments should be incorporated throughout the AI product lifecycle. This is consistent with the draft Data Privacy Guidelines for the development and operation of Artificial Intelligence solutions issued by the UN Special Rapporteur on the Right to Privacy and the UN Guiding Principles on Business and Human Rights.

Designing AI technologies must respond to socio-technically situated plurality, meaning that designers and developers need to attend to the circumstances of the individuals whose lives will be affected by their designs. Aboriginal and Torres Strait Islander people make up approximately 3.2% of Australia’s population, and AI’s potential impacts on such vulnerable populations could lead to further marginalization.

AI systems operating in Australia should incorporate safety by design and privacy by design to reduce likely harms to minors, consistent with the concerns raised by the eSafety Commissioner of Australia. Rapid development of AI systems without adequate guardrails on data collection and the labelling of training data poses disproportionate risks to minors, particularly to their privacy, through exposure to harmful or age-inappropriate content, through bias and discrimination, and through the exploitation of their vulnerabilities by unethical or manipulative advertising.

A risk-based approach to addressing potential AI risks is only effective if (a) it does not limit itself to use cases of generative AI but takes into account the broader AI ecosystem, including use cases, significant feature modifications, third-party use cases, and the societal impact of these systems, and (b) it is complemented by a public impact assessment. What constitutes a “high risk” versus “low risk” use should not be predetermined, but rather assessed on the basis of periodic and systematic review of the systems, their use cases, and their impact.


Abdullah Hasan Safir

Community Member

Abdullah Safir is an AI ethics and critical design researcher and a PhD candidate at the University of Cambridge.

Shahzeb Mahmood

Senior Researcher

Shahzeb Mahmood is a senior researcher at Tech Global Institute, specializing in Internet and technology laws. He previously served as legal counsel to telecom, Internet and FAANG companies.

Sabhanaz Rashid Diya

Founder and Executive Director

Sabhanaz Rashid Diya is the founding board director at Tech Global Institute and a Senior Fellow at the Centre for International Governance Innovation.