Roger Clarke's 'Submission re AI Report'

Submission to Department of Industry, Innovation & Science
re 'Artificial Intelligence: Australia's Ethics Framework'

Version of 20 May 2019

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2019

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/EC/AAISub-1905.html


Abstract

If the April 2019 version of 'Artificial Intelligence: Australia's Ethics Framework' were to be adopted, serious harm would be done, to Australian society, to the reputation of the Australian AI industry, and to the prospects of AI's potential contributions being realised.

The document is inadequate on a number of grounds. This Submission addresses three: the threats inherent in AI, the narrowness of the proposed 'core principles', and the absence of any acknowledgement of the need for regulatory measures.



Introduction

This is a Submission in relation to

Dawson D, Schleiger E, Horton J, McLaughlin J, Robinson C, Quezada G, Scowcroft J & Hajkowicz S (2019)
'Artificial Intelligence: Australia's Ethics Framework'
Data61 / CSIRO, April 2019, at
https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/supporting_documents/ArtificialIntelligenceethicsframeworkdiscussionpaper.pdf

On the one hand, that document - hereafter 'the Report' - contains some valuable discussion, covering a wide territory.

On the other hand, the Report contains some very significant weaknesses.

I note, and record my support for, the Submissions by Ms Johnston and by Dr Mann.

This Submission is complementary to those documents.

I address here three aspects: the threats inherent in AI, the adequacy of the proposed principles, and the need for regulatory measures.

This document draws heavily on articles that are currently in press with Computer Law & Security Review, the highest-ranked journal of technology law and policy.


1. The Threats Inherent in AI

AI takes many forms, and they are complex and obscure. It is therefore unsurprising that public understanding is limited and confused, and media commentaries on the negative impacts of AI are often not well-informed.

AI embodies several very serious threats, which can be summarised as follows:

AI gives rise to errors of inference, of decision and of action, which arise from the more or less independent operation of artefacts, for which no rational explanations are available, and which may be incapable of investigation, correction and reparation

The issues need to be analysed at a much more granular level than that, however, and they then need to be addressed. If that approach is not adopted, deployment of AI will result in serious harm, AI will be resoundingly rejected by the public, and the potential benefits that AI offers will not be achieved.

A distillation of the harms that AI gives rise to identifies the following aspects. For further detail on each aspect, see s.4 of Clarke (2019a):

  1. Artefact Autonomy
    This relates to the appropriateness or otherwise of delegation by humans to artefacts.
    See s.4.1
  2. Assumptions about Data
    This encompasses issues relating to data quality, data meaning, and the compatibility of data acquired from multiple sources.
    See s.4.2
  3. Assumptions about the Inferencing Process
    The issues include the applicability of the inferencing process to the particular problem-category or problem-domain; the suitability of the available data as input to the particular inferencing process; the approach taken to the problem of missing data-values; the need for independent analysis and certification; and the widespread, cavalier claims that empirical correlation unguided by theory is enough, and that rational explanation is a luxury that the world needs to learn to live without.
    See s.4.3
  4. The Opaqueness of the Inferencing Process
    The lack of transparency about the processes used to draw inferences from data gives rise to further conditions which together undermine the principles of natural justice and procedural fairness, and breach reasonable public expectations.
    See s.4.4
  5. Irresponsibility
    There is widespread failure to assign appropriate responsibilities to relevant organisations, because of inadequate discrimination among the various stages of the AI industry supply-chain from laboratory experiment to deployment in the field.
    See s.4.5

The Report contains mention of some of these elements. However, the Report's discussion of the serious threats that AI embodies falls a long way short of nailing down the issues that must be addressed. Inevitably, this leads to omissions and inadequacies in the principles, and to the underestimation of public risk.


2. Principles for Responsible AI

The last few years have seen considerable nervousness about the impacts of AI on society, and about the impacts of negative public opinion on AI. As the Report makes clear, many sets of principles have been put forward. These have reflected concerns about the serious risks that need to be understood and managed (expressed most often by civil society, and to some extent by government agencies that address social issues), and concerns that the potential perceived to exist in AI may not be realised (expressed by corporations and industry associations, and by government agencies that address economic issues).

During late 2018 and early 2019, I analysed about 30 such documents. Based on that analysis, I consolidated a set of 10 Themes and 50 Principles that are evident in them. The details are in Clarke (2019b).

The Report presents a set of 'core principles' on p.6. These are clearly intended to be at the heart of the proposal, because they are central to the first 5 of the 7 questions asked. Further, because organisations are looking for recipes, the core principles would be read by many without reference to the associated discussion.

It is necessary to read the 'core principles' in isolation, because the rest of the text in the Report is discursive, and is not designed to explain or articulate each of the 'core principles'. In raw terms, the 'core principles' cover only 12 of the 50 Principles extracted from existing literature and presented in Clarke (2019b). Even allowing for a number of points that are arguably implicit, the 'core principles' proposed in the Report score in the 'Bad Fail' range of 25-30%.

For details of the analysis delivering that result, see Appendix 1 to this Submission.

The 'core principles' omit many principles that are quite fundamental to managing public risk. The treatment of 38 of the 50 Principles as though they were 'non-core' is simply indefensible.

The scope of the 'core principles' is extraordinarily narrow, not only when measured against the 50 Principles, but also when compared to leading exemplars. For example, the European Commission's Guidelines (EC 2019) score 74%. The OECD's set is due for publication, possibly as soon as late May 2019. I have contributed to the OECD's discussions in my role as a member of its civil society expert group (CSISAC). My impression is that the final version (which I have not yet seen) is also likely to score comfortably over 50%, perhaps as high as 75%.

If guidance as clearly deficient as the 'core principles' were to be given any form of imprimatur by an Australian government agency, the country would be declaring itself to be an international backwater. This would add to the serious harm that the Telecommunications and Other Legislation Amendment (Assistance and Access) Act has already done to the international reputation of the Australian IT industry. Further, the document would be treated with derision by the public and the media, resulting in stronger opposition to the deployment even of those forms of AI that are capable of responsible application.


3. The Need for Regulatory Measures

A number of the changes that AI brings with it are quite fundamentally threatening to Australian values. The Report as it stands appears to assume that self-regulation might be enough to cope with dramatic change. This position would be at best quaint, and at worst disingenuous.

The threats embodied in AI are so serious that they justify application of the precautionary principle. The soft form of the precautionary principle can be stated as follows:

If an action or policy is suspected of causing harm, and scientific consensus that it is not harmful is lacking, then the burden of proof falls on those taking the action

The series of papers on which this Submission draws concludes that AI is a suite of genuinely disruptive technologies, with features that render existing laws ambiguous and ineffective, and with great potential to harm society. Because important public interests are at serious risk, policy-makers urgently need to structure a regulatory regime to ensure that AI is approached in a responsible manner. The more substantial form of the precautionary principle is relevant to AI:

"When human activities may lead to morally unacceptable harm that is scientifically plausible but uncertain, actions shall be taken [in advance] to avoid or diminish that potential harm" (TvH 2006)

A range of regulatory approaches is possible, discussed at length in sections 1-5 of Clarke (2019c). That paper concludes that the problems are, at this stage, far too poorly understood to enable expression of detailed protections in statute. On the other hand, a co-regulatory approach, comprising a number of specific and critical features, may be capable of delivering the necessary outcomes. For detailed discussion, see section 7 of that article.

Note that this Submission is not criticising the Report for not solving these problems. That requires attention by a differently-constituted organisation, most likely the Australian Law Reform Commission. However, the Report needs to acknowledge that self-regulation by AI providers, by AI industry associations, by AI users including government agencies, and by user associations, is inadequate, and that a regulatory response is necessary.


4. Conclusions

The Report's text raises false hopes of a responsible approach to AI.

This Submission concludes that:

  1. the analysis of harms in the Report is inadequate;
  2. the set of principles that the Report contains is so limited as to be harmful to Australian society, and to Australia's reputation in the AI field; and
  3. the lack of clear acknowledgement of the need for regulatory action further undermines the credibility of the propositions that the Report contains.

It is vital that the Report not be adopted, and not be referred to with approval by Australian government agencies.

Substantially enhanced foundational analysis is essential. This needs to reflect the issues raised in this Submission and those of Ms Johnston and Dr Mann.

A suitable replacement analysis, which applies Clarke (2019b), EC (2019), and the forthcoming OECD document, can deliver a comprehensive set of Principles for Responsible AI. A document of that nature can protect Australian society, by providing appropriate guidance to entities at all stages of the AI industry value-chain, including Australian government agencies, can protect Australian industry, and can provide a firm foundation on which socially- and economically-positive applications of AI can be developed and deployed.


5. The Questions Asked

Rather than simply respond to the questions posed, this Submission has presented an analysis based on prior research on the topic.

To provide some degree of fit with the evaluation template, brief responses are provided below. They need, of course, to be read within the context set by the main body of this Submission:

1. Are the principles put forward in the discussion paper the right ones? Is anything missing?

A great many principles are missing.

2. Do the principles put forward in the discussion paper sufficiently reflect the values of the Australian public?

No, the principles in the Report are seriously inadequate.

3. As an organisation, if you designed or implemented an AI system based on these principles, would this meet the needs of your customers and/or suppliers? What other principles might be required to meet the needs of your customers and/or suppliers?

The set of 10 Themes and 50 Principles in Clarke (2019b) needs to be addressed in full.

4. Would the proposed tools enable you or your organisation to implement the core principles for ethical AI?

The specifications are too sketchy to provide real guidance. Section 3 of Clarke (2019b) discusses risk assessment and management generally, and Section 3.3 presents a specific approach to Multi-Stakeholder Risk Assessment and Risk Management.

5. What other tools or support mechanisms would you need to be able to implement principles for ethical AI?

The Report needs to provide more specific guidance on how to embed social responsibility into organisational culture.

6. Are there already best-practice models that you know of in related fields that can serve as a template to follow in the practical application of ethical AI?

There is regrettably little in the way of good practice in organisations in such areas. The threats inherent in AI make it vital that good practice models be rapidly developed.


References

Clarke R. (2019a) 'Why the World Wants Controls over Artificial Intelligence' Forthcoming in Computer Law & Security Review, PrePrint at http://www.rogerclarke.com/EC/AII.html

Clarke R. (2019b) 'Principles and Business Processes for Responsible AI' Forthcoming in Computer Law & Security Review, PrePrint at http://www.rogerclarke.com/EC/AIP.html

Clarke R. (2019c) 'Regulatory Alternatives for AI' Forthcoming in Computer Law & Security Review, PrePrint at http://www.rogerclarke.com/EC/AIR.html

EC (2019) 'Ethics Guidelines For Trustworthy AI' European Commission, April 2019, at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=58477

TvH (2006) 'Telstra Corporation Limited v Hornsby Shire Council' NSWLEC 133 (24 March 2006), esp. paras. 113-183, at http://www.austlii.edu.au/au/cases/nsw/NSWLEC/2006/133.htm


Appendix: Evaluation of the Report Against the 50 Principles

The Report's 'Core principles for AI' are reproduced below, each followed by italicised cross-referencing to the 50 Principles in Clarke (2019b).

1. Generates net-benefits.
The AI system must generate benefits for people that are greater than the costs.

Corresponds to number 4.3 of the 50 Principles in Clarke (2019b)
There is no requirement for, but there is weak implication of, 1.4, 1.8, 4.1, 4.2

2. Do no harm.
Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimise any negative outcomes.

1.9, 3.6, 4.1, 5.6

3. Regulatory and legal compliance.
The AI system must comply with all relevant international, Australian Local, State/Territory and Federal government obligations, regulations and laws.

5.2
(Legal compliance with other laws is a 'given' rather than an AI Principle)

4. Privacy protection.
Any system, including AI systems, must ensure people's private data is protected and kept confidential plus prevent data breaches which could cause reputational, psychological, financial, professional or other types of harm.

7.4, 8.2
This is arguably also a weak form of 3.3. (Australian data protection law falls far below public expectations, because it is riddled with exemptions, exceptions, overrides and designed-in loopholes)

5. Fairness.
The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the 'training data' is free from bias or characteristics which may cause the algorithm to behave unfairly.

5.1

6. Transparency & Explainability.
People must be informed when an algorithm is being used that impacts them and they should be provided with information about what information the algorithm uses to make decisions.

6.1
Note that the detailed description does not implement the 'explainability' element of the heading.

7. Contestability.
When an algorithm impacts a person there must be an efficient process to allow that person to challenge the use or output of the algorithm.

This embodies a weak implication of 9.2 and 10.1

8. Accountability.
People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm, even if the impacts are unintended.

9.2, 10.1

Total 12 / 50 = 24%, comprising 1.9, 3.6, 4.1, 4.3, 5.1, 5.2, 5.6, 6.1, 7.4, 8.2, 9.2, 10.1

Plus weak implication of a further 4: 1.4, 1.8, 3.3, 4.2
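The scoring arithmetic above can be verified with a short script. The principle identifiers are those listed in the totals; the 50-principle catalogue itself is in Clarke (2019b), and the script is purely illustrative:

```python
# Illustrative check of the Appendix scoring arithmetic.
# Identifiers are the Principles from Clarke (2019b) as listed in the totals above.

TOTAL_PRINCIPLES = 50

# Principles that the Report's 'core principles' cover
covered = {"1.9", "3.6", "4.1", "4.3", "5.1", "5.2", "5.6",
           "6.1", "7.4", "8.2", "9.2", "10.1"}

# Principles only weakly implied by the Report
weakly_implied = {"1.4", "1.8", "3.3", "4.2"}

strict_score = len(covered) / TOTAL_PRINCIPLES
generous_score = len(covered | weakly_implied) / TOTAL_PRINCIPLES

print(f"Strict:   {len(covered)}/{TOTAL_PRINCIPLES} = {strict_score:.0%}")
print(f"Generous: {len(covered | weakly_implied)}/{TOTAL_PRINCIPLES} = {generous_score:.0%}")
# Strict:   12/50 = 24%
# Generous: 16/50 = 32%
```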


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University. He is a longstanding Board-member and past Chair of the Australian Privacy Foundation, and Company Secretary of the Internet Society of Australia. However, this Submission is made in a personal capacity.



Created: 8 May 2019 - Last Amended: 20 May 2019 by Roger Clarke