Roger Clarke's Web-Site

© Xamax Consultancy Pty Ltd, 1995-2024

Roger Clarke's 'Responsible AI'

Responsible AI - Series Overview

Prepared for submission as a Special Section in Computer Law & Security Review

Review Version of 17 March 2019

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2019

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/EC/AIA.html


Roger Clarke is a frequent contributor to this journal. A previous series of four articles on drones, in CLSR 30, 3 (June 2014), attracted considerable interest and significant numbers of downloads and citations. In this new series, he considers how executives and policy-makers can responsibly address the threats that accompany the promise of Artificial Intelligence (AI).

Organisations across the private and public sectors are looking to use AI techniques not only to draw inferences, but also to make decisions and take action, and even to do so autonomously. This is despite the absence of any means of programming values into technologies and artefacts, and the obscurity of the rationale underlying inferencing using contemporary forms of AI.

To what extent are the various forms of AI really suitable for real-world applications? If AI is inscrutable, how can executives satisfy their board-members that the organisation is being managed appropriately? Beyond operational management, can compliance risks be managed? And can important relationships with customers, staff, suppliers and the public be suitably protected?

Ill-advised uses of AI need to be identified in advance and nipped in the bud, to avoid harm to important values, both corporate and social. Organisations need to extract the achievable benefits from advanced technologies rather than dreaming dangerous dreams.

Policy-makers, meanwhile, need to devise and implement regulatory arrangements that will ensure that the inevitable lapses in the behaviour of corporations and government agencies do not result in serious, and perhaps irreparable, harm to personal and social values - while avoiding undue constraints on the benefits that AI technology can deliver.

The first article examines AI and the issues it gives rise to. The second identifies ways in which organisations can ensure that they behave responsibly. The third assesses alternative approaches whereby AI can be subjected to appropriate regulation, and presents a proposal for a co-regulatory regime.

1. Why the World Wants Controls over Artificial Intelligence

The notion of AI may be historically interesting, but it is fuzzy, contested, and a barrier to progress. Reviews of relevant concepts and of some key exemplar technologies enable the identification and categorisation of key public concerns about AI. This provides a basis for proposals in relation to organisational action and regulatory design, which are presented in the two subsequent articles in the series.

2. Principles and Business Processes for Responsible AI

Organisations apply AI in order to draw inferences, make decisions, and take actions. In some cases, the resulting artefacts and systems are intended to have a substantial degree of autonomy. This gives rise to risks both to the organisation and to other stakeholders. This article considers how risk assessment techniques can be applied in order to provide a basis for managing the risks facing not only the organisation but also all other stakeholders. The process is underpinned by consolidating a comprehensive set of Principles that draws on the many, very partial proposals that have been published in recent years.

3. Regulatory Alternatives for AI

Self-regulation can contribute to the management of risks arising from AI, but by itself it is incapable of satisfying the public need. This article discusses the nature of regulation, and defines a threshold test for regulatory intervention. After considering existing laws and the various forms that regulation can take, a co-regulatory framework for AI is proposed.


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.




Created: 20 February 2019 - Last Amended: 17 March 2019 by Roger Clarke