Reports  |  June 28, 2019

Artificial Intelligence: Australia’s Ethics Framework – Response by the Law Council of Australia

Report by the Law Council of Australia. Response to Artificial Intelligence: Australia’s Ethics Framework (Discussion Paper). 36 pages.

Executive Summary

1. The Law Council of Australia welcomes the consultation by the Department of Industry, Innovation and Science (DIIS) on Artificial Intelligence: Australia’s Ethics Framework (Consultation), and is pleased to make a submission in response to the Data61 CSIRO discussion paper Artificial Intelligence: Australia’s Ethics Framework (Discussion Paper).

2. The Law Council believes that new and evolving technologies, including artificial intelligence (AI), machine learning and other forms of automated decision-making, offer important benefits, including the potential to strengthen the economy, increase the cohesion and inclusiveness of society, support sustainability and the efficient use of resources, and increase human wellbeing. However, a number of significant risks and challenges also arise, and it is both necessary and timely to discuss them.

3. The Discussion Paper explores a number of these issues, and proposes an ethical framework comprising eight core principles for AI, as well as a toolkit of processes, safeguards and resources to support the implementation of that framework. Key issues raised by the Law Council in responding to the Discussion Paper include:

  • (a) the rapid development of AI and related technologies is in many respects outpacing the legal and regulatory frameworks necessary to guide and govern them, placing the privacy and other fundamental rights of individuals at risk;
  • (b) an ethics framework for AI should be rights-based and strongly grounded in overarching principles, including those drawn from international human rights law, and subject to principles of the rule of law and procedural fairness;
  • (c) to be effective and consistently applied, an ethics framework must be enforceable. Further, careful discussion is required regarding potential models of an enforcement mechanism, which may vary between contexts;
  • (d) the establishment of a regulatory body should be carefully considered to provide oversight of AI systems and their development and use;
  • (e) accountability and liability associated with the development, implementation and use of AI systems should be determined so as to provide remedies where damage is caused, and to allow for appropriate scrutiny by the courts; and
  • (f) greater distinction (and further discussion) is needed between public and private sector applications of AI, noting the different principles of administrative law that apply particularly to decision-making by government organisations and other public sector entities.

4. This submission puts forward a number of recommendations which, in summary, address the following issues:


  • (a) the need for a flexible and inclusive definition of AI;

Question 1: Are the principles put forward in the discussion paper the right ones? Is anything missing?

  • (b) inclusion of a principle of ‘respect for human rights and human autonomy’;
  • (c) further consideration of what constitutes ‘net-benefits’, subjecting them to principles of the rule of law and equality before the law, and a requirement that AI systems disclose their benefits and detriments;
  • (d) expanding the principle of ‘doing no harm’ to address system design, integrity and vulnerabilities over time; and restriction of the use of AI for ‘scoring’ of citizens, in line with a rights-based framework;
  • (e) formal implementation of compliance measures to provide oversight, enforcement and redress; as well as a requirement for registration with and periodic audit by an independent regulator;
  • (f) further refinement of the principle of ‘privacy protection’; consistent use of privacy-related terminology; further discussion of lawful, fair and transparent data-handling and necessary restrictions arising from privacy law; limiting the use of personal information for secondary purposes; addressing data quality to reduce bias; and consideration of a federal Charter of Rights as an ethical foundation for AI systems;

Question 2: Do the principles put forward in the discussion paper sufficiently reflect the values of the Australian public?

  • (g) importance of inclusive public consultation with people likely to be affected by AI;
  • (h) an ethical framework for AI to be based on human rights principles, informed by good precedents and giving further consideration to inclusion of employment protections;

Question 5: What other tools or support mechanisms would you need to be able to implement principles for ethical AI?

  • (i) the capability of regulatory bodies and key civil society actors to be increased, complementary to the role of an AI regulator;

Question 7: Are there additional ethical issues related to AI that have not been raised in the discussion paper? What are they and why are they important?

  • (j) further consideration to be given to the application of administrative law principles to AI, including where AI is used by government and public sector decision-makers;

General observations:

  • (k) caution is recommended with regard to linking levels of risk to the number of persons affected;
  • (l) further consideration to be given to data governance in AI; and
  • (m) a potential onus on AI systems to demonstrate accuracy to be considered.

5. The Law Council welcomes the opportunity to provide this submission and would be happy to elaborate further on any of the points addressed. [ . . . ]

Additional information at