
The Current AI Push is Naive, Unethical, or Both

Review Draft of 4 October 2019

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2019

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/EC/AINoU.html


The current round of enthusiasm for AI is infecting people who should know better. Considerable public concern already exists about unfair, demeaning and incomprehensible applications of technology. The significance of each new blunder gets amplified through the media and social media.

This article is a response to recent publications by an organisation that people assume to be a 'thought-leader' in the ICT space, Data61. Its CEO was heavily quoted in an uncritical pump-piece in Innovation Australia (McClure 2019).

That article drew on the organisation's consultancy report for the Department of Industry (DeptIndy 2019). But Data61's report presented an insufficiently detailed examination of disbenefits and risks. It also glossed over the inadequacies of protections in Australia for human rights, consumers and employees, and it proposed a set of 'core principles' that omits many critical elements.

Misleading Representations of What AI Is

The depiction of AI that the CEO, Marshall, offered was "AI refers to data-driven algorithms that can autonomously solve problems and performs tasks without human guidance". That formulation is unhelpful and misleading.

Important categories of AI do not deliver algorithms but merely a blancmange of pre-processed data. Rather than being oriented to problems and solutions, that blancmange embodies no model of the real world to which it is applied, and hence sidesteps problem-definition entirely. So the extent to which it 'solves' anything is in serious doubt.

Moreover, the presumption that human guidance can be dispensed with is one of the most contentious aspects of the AI proposition. It's simply not a necessary feature, and in many circumstances it's a highly undesirable characteristic.

There are problem-spaces within which autonomous decision systems are highly appropriate (e.g. buggies on Mars, beyond the working-range of telecommunications-based control; and ongoing, real-time, fine adjustments to air- and sea-craft trim and stability). In complex physical and especially social and economic contexts, on the other hand, it's not just desirable, but essential, that the focus be on decision support systems, with artefacts working with and for humans.

Overblown Claims about AI's Potential

There's no doubt that some forms of AI research have delivered value, including a number of different forms of pattern recognition. The successes have featured awareness of the context of use, careful design, thorough testing both internally and against the real world, and safeguards, controls and audit. Some further forms will doubtless also graduate from wishful thinking to practical and valued application.

But the people who are pumping AI are repeating the blunders that have dogged everything bearing the AI tag for the last 60 years. Too often, wild enthusiasm obscures blithe assumptions. Marketing-speak attracts funding (at least during 'AI summer' phases), but funding is not the only prerequisite for research breakthroughs adequate to satisfy the expectations. AI-spruikers have to confront reality: AI technologies have to overcome a vast array of challenges in order to become both effective and safe.

What's Actually Wrong with AI?

Many of the concerns expressed in the popular media are vague or misguided. In a recent article, I grouped the genuine issues into the following five areas.

1. Assumptions about Data

All forms of AI depend on data. That data is subject to a vast array of quality issues. Added to that are considerable uncertainties about the meaning of each item of data and of the values each item contains. Serious problems also arise from incompatibilities among data acquired from multiple sources. It is far too common for limited effort to be invested in understanding these factors and addressing the issues that arise from them.
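
To make the point concrete, even a rudimentary quality audit of a dataset merged from multiple sources surfaces the kinds of issues at stake. The following sketch is purely illustrative, assuming tabular data held in a pandas DataFrame; the checks shown are a bare minimum, not a complete quality-assurance regime:

    import pandas as pd

    def audit_merged_data(df: pd.DataFrame) -> dict:
        """Rudimentary quality audit of a dataset merged from multiple
        sources. Illustrative only; a real regime needs domain-specific
        checks as well."""
        report = {}
        # Missing values, which inferencing techniques variously drop or
        # silently impute, with quite different downstream effects.
        report['missing_by_column'] = df.isna().sum().to_dict()
        # Duplicate records, common after merging sources without a shared key.
        report['duplicate_rows'] = int(df.duplicated().sum())
        # Divergent encodings of the same attribute across sources,
        # e.g. 'M'/'F' in one source and 'Male'/'Female' in another.
        for col in df.select_dtypes(include='object').columns:
            report[f'distinct_values:{col}'] = sorted(df[col].dropna().astype(str).unique())
        return report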

2. Assumptions about the Inferencing Process

Inferencing processes are enthusiastically applied to many different problem-categories and problem-domains. Yet, all too often, analysts fail to adequately check the suitability of the available data as input to the particular inferencing process. Despite the range and complexity of the challenges, far too little effort is invested in evaluating the sensitivity of inferences to random data errors, to systematic data errors, and to different approaches to dealing with missing data values.
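
Such a sensitivity evaluation need not be elaborate. Here is a minimal sketch, assuming a conventional trained classifier that exposes a predict() function; the Gaussian noise model and the parameter values are placeholders, and systematic errors or alternative missing-data treatments can be probed by substituting a different perturbation:

    import numpy as np

    def sensitivity_to_noise(model, X: np.ndarray, noise_scale: float = 0.01,
                             trials: int = 100, seed: int = 0) -> float:
        """Average fraction of predictions that change when small random
        perturbations are applied to the input data. Illustrative only."""
        rng = np.random.default_rng(seed)
        baseline = model.predict(X)
        changed = 0.0
        for _ in range(trials):
            # Perturb each feature in proportion to its own spread.
            X_noisy = X + rng.normal(0.0, noise_scale, X.shape) * X.std(axis=0)
            changed += float(np.mean(model.predict(X_noisy) != baseline))
        return changed / trials

A score near zero suggests inferences that are robust to random data errors; a substantial score is a warning that small data-quality problems will flow straight through to decisions.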

Cavalier claims are even made that empirical correlation unguided by theory is enough, and that rational explanation is a mere luxury that the world needs to learn to live without. That might appeal to technocrats, whom it frees to get on with their sport, but not to those afflicted by the unjustified harm that this approach gives rise to.

3. The Opaqueness of the Inferencing Process

There's a lack of transparency about most AI processes used to draw inferences from data. As a result, a humanly-understandable explanation is frequently not available to the people impacted by organisations' decisions and actions.

The inability to reconstruct the sequence of events denies independent parties (e.g. auditors, judges and coroners) access to records of initial and intermediate states, and to triggers for transitions between states. As a result, there is no reliable basis for the recognition of and reparation for errors, and fault-ridden processes continue unhindered.
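
In practice, the records referred to here amount to an append-only decision trail. A minimal sketch of a single entry in such a trail, with invented field names, might look like the following:

    import hashlib
    import json
    from dataclasses import asdict, dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        """One step in a decision trail: the state before the step, the
        trigger for the transition, and the state after, so that the
        sequence of events can later be reconstructed by an auditor,
        judge or coroner. Field names are placeholders."""
        case_id: str
        step: int
        state_before: dict
        trigger: str    # the input, rule-firing or model output that caused the transition
        state_after: dict
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

        def digest(self) -> str:
            # Content hash, so that subsequent tampering is detectable.
            return hashlib.sha256(
                json.dumps(asdict(self), sort_keys=True).encode()).hexdigest()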

The result is that organisations that are nominally 'accountable' can escape liability and sanctions. This state of affairs breaches public expectations and undermines the principles of natural justice and procedural fairness.

4. Artefact Autonomy

The loss of accountability arising from obscurity is compounded by the design of artefacts to operate more or less independently of humans. Where the artefact embodies a low-grade model of the domain, or even none at all, autonomy inevitably results in errors of inference, of decision and of action.

Irresponsible designs of such kinds are tantamount to an open invitation to the public to adopt Luddite attitudes to AI technology. The deep stupidity of Centrelink's (non-AI) Robodebt scheme, and the public reaction to it, are a harbinger of things to come.

5. Irresponsibility

A corollary of Arthur C. Clarke's Third Law is: 'With any sufficiently mysterious technology, the buck stops in the middle of nowhere'. It's essential to discriminate among the various stages of the AI industry supply-chain, from laboratory experiment to deployment in the field, and impose obligations appropriate to the roles each organisation and individual plays.

Naive Assumptions about Legal Protections

Discussions of ethics lead nowhere, and their primary contribution is to lay a smokescreen. Practices impact people, and practices have to be subject to law. In a recent article, I examined the alternative forms that regulatory schemes can take and concluded that a co-regulatory approach to AI is the solution that society needs.

The Data61 report, instead, presents a very shallow treatment of regulatory factors. It offers the comment that "Australia is a party to seven core human rights agreements which have shaped our laws" (pp. 5, 17-20), but fails to analyse the extent to which Australian laws actually implement those protections.

Australia is one of the few countries in the world that has no entrenched human rights protections. Added to that, its data protection laws have been whittled away by designed-in loopholes and continual amendments, to the point that they're among the world's weakest. Australians are wide open to abuses by government and exploitation by business.

To the extent that AI, after decades of disappointments, actually delivers on the promises, the massive shift in the balance of power arising from mindless, data-based automation will be to the serious detriment of human freedoms. Even where AI again drops a long way short of its promises, it will be difficult to eradicate harmful, entrenched and in some cases embedded applications, and the mind-set that 'the machine is right'.

Principles for Responsible AI

A large number of organisations have uttered sets of principles. Based on a consolidation of principles from 30 such sources, I recently assembled a comprehensive suite of '50 Principles for Responsible AI', and used them to score various proposals.

Some of the documents, particularly those from IT suppliers, are blatant public relations exercises (e.g. a raw count of the principles that are recognisable in the documents scores Microsoft at 10%, Google 14% and IBM 14%). Industry associations are motivated to avoid regulatory action, and hence might be expected to feel the need for credibility and therefore offer a little more substance. Yet they also score in the range 10-14%.

Even non-governmental and governmental organisations achieve disappointing scores, with 13 of the 14 in the range 8-34%. The OECD had the advantage of being a latecomer to the scene, publishing only in May 2019; but, dragged down by the deadweight of US industry dominance, its document still delivered a lamentable score of 40%. To date, only a single organisation gets a pass-mark: the European Commission, whose Guidelines (EC 2019) achieved 74%.
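
For clarity, the scoring is simple raw-count arithmetic: on that basis, the OECD's 40% corresponds to 20 of the 50 Principles being recognisable in its document, and the European Commission's 74% to 37 of the 50. An illustrative computation (the principle identifiers are placeholders):

    def coverage_score(recognised: set, canonical: set) -> float:
        """Raw-count coverage: the share of the canonical principles
        that are recognisable in a document's set."""
        return len(recognised & canonical) / len(canonical)

    canonical = {f'P{i:02d}' for i in range(1, 51)}    # the 50 Principles
    recognised = {f'P{i:02d}' for i in range(1, 21)}   # 20 recognisable
    print(f'{coverage_score(recognised, canonical):.0%}')  # prints 40%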

The Data61 document represents a plea for a law-free zone, so that the AI industry can develop unfettered. As a substitute for effective regulatory measures, it proposes a mere 'ethics framework', some (unenforceable) 'core principles', and a toolkit. Moreover, despite being one of the most recently published, the Data61 set that is being considered by the Australian Department of Industry has dismal coverage of the field, with only a 26% score: 37 of the 50 Principles, presumably regarded as 'non-core', are missing from it.

The 50 Principles have no authority, legal or otherwise. On the other hand, they're a consolidation of published sources that are assumed to be influential; and many of the requirements that Data61 omits are quite fundamental to managing public risk.

Conclusions

Benefits may be achievable with some forms of AI, if they are used in appropriate contexts, and the artefacts and systems they are built into are well-designed, well-tested in the real world before being relied upon, applied with care, and subjected to controls, audit and sanctions for non-compliance.

But the benefits will not be realised if Data61 and other would-be thought-leaders continue to exaggerate progress and offer excuses for avoiding regulatory measures. Unjustified claims, irresponsible designs and inadequate safeguards will turn the world, once again, against all things 'AI'.

The law stipulates requirements for 'due process' and 'procedural fairness'. But such bland terms don't capture the visceral reaction the public will have against dumb AI that makes decisions based on inferences for which no rational explanation is available, and which cannot be investigated or justified.

A thin veneer of chatter about 'ethics', unenforceable international treaties and 'core principles' will not bring about responsible AI technology, responsible designs using AI, and responsible application of AI-based systems. Real action is needed to get irresponsible AI proponents, and irresponsible AI, under public control.


References

Clarke R. (2018) 'Centrelink's Big Data 'Robo-Debt' Fiasco of 2016-17' Revision of 14 January 2018, at http://www.rogerclarke.com/DV/CRD17.html

Clarke R. (2019) 'The OECD's AI Guidelines of 22 May 2019: Evaluation against a Consolidated Set of 50 Principles' Xamax Consultancy Pty Ltd, May 2019, at http://www.rogerclarke.com/EC/AI-OECD-Eval.html

Clarke R. (2019) 'Why the World Wants Controls over Artificial Intelligence' Computer Law & Security Review 35, 4 (Jul-Aug 2019) 423-433, PrePrint at http://www.rogerclarke.com/EC/AII.html

Clarke R. (2019) 'Principles and Business Processes for Responsible AI' Computer Law & Security Review 35, 4 (Jul-Aug 2019) 410-422, PrePrint at http://www.rogerclarke.com/EC/AIP.html

Clarke R. (2019) 'Regulatory Alternatives for AI' Computer Law & Security Review 35, 4 (Jul-Aug 2019) 398-409, PrePrint at http://www.rogerclarke.com/EC/AIR.html

DeptIndy (2019) 'Artificial Intelligence: Australia's Ethics Framework' Department of Industry, Innovation and Science, April 2019, at https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/supporting_documents/ArtificialIntelligenceethicsframeworkdiscussionpaper.pdf

EC (2019) 'Ethics Guidelines for Trustworthy AI' High-Level Expert Group on Artificial Intelligence, European Commission, April 2019, at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419

McClure D. (2019) 'Our billion-dollar AI opportunity' Innovation Australia, 16 September 2019, at https://www.innovationaus.com/2019/09/Our-billion-dollar-AI-opportunity


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University. He is a past Chair of the Australian Privacy Foundation, and Company Secretary of the Internet Society in Australia.


