
© Xamax Consultancy Pty Ltd,  1995-2024


AI and Robotics: The Threats, and A Reconception

Version of 11 May 2024

Outline for an LLM Seminar on 'AI, Law & Society'
at the ANU College of Law, on 15 May 2024

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2024

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at

The accompanying slide-set is at


The seminar commences with a review of the original conception of 'Artificial Intelligence', and its very rapid corruption from a "conjecture" to a fervent belief. Practical applications of AI have abandoned the 'grand challenge' that aspires to replicate human intelligence, and instead are inspired by it.

No single definition of 'AI' exists, and most of the hundreds on offer are unhelpful and even seriously misleading. The key characteristics evident in the literature are artefactual perception and cognition, goals, and action formulation to achieve those goals. Some authors add the implementation of that action, and the scope of this seminar accordingly extends to artefactual action in the real world, usually associated with robotics.

Attention is drawn to a common failure among AI researchers and practitioners to recognise that intelligence also involves second-order intellect or insight. Key features of that insight include appreciation that the formulation of goals is values-driven; that understanding of context depends on what humans call 'common sense'; that environmental awareness is essential, together with the means to detect relevant changes in the environment; and that ongoing re-evaluation of values and adaptation of goals must be sustained.

The large number of misconceptions embodied in the theory and practice of AI means that 'progress' in the field (by which proponents and entrepreneurs mean the promise, prospect or at least hope of positive impacts, and particularly profits) is accompanied by both substantial risk and substantial actual collateral damage, whether or not the positive impacts are achieved. The categories of threat are distilled down to artefact autonomy, inappropriate assumptions about data and about inferencing processes, the opaqueness of inferencing processes, and irresponsibility.

A series of steps is then taken to develop an alternative conception of the endeavour. The first step is to deprecate the idea of 'artificial', and to shift the focus to 'artefactual intelligence'. The second step is to recognise the need for Artefact Intelligence to be designed to work for and with people, resulting in the notion of 'Complementary Artefactual Intelligence (CAI)'. The third step is to lift the ambition much higher, and combine CAI with Human Intelligence to deliver something superior to each of them: 'Augmented Intelligence'. These three steps alone are capable of delivering great benefits, facilitating the avoidance of many threats and the management of residual risks.

The fourth step extends beyond the intellectual realm into the physical. Inferences and decisions give rise to actions, and an increasing proportion of artefacts is capable of acting directly on the world. In the same way that Artefact Intelligence is most valuable when it is designed to complement Human Intelligence, Artefact Actuators can be developed with the aim of complementing Human Effectors. The fifth and final step in the articulation of the alternative re-conception is recognition of the significance of Complementary Artefact Capability (CAC) that dovetails with Human Capability, resulting in Augmented Capability (AC).

Reference List

Clarke R. (2005) 'Human-Artefact Hybridisation: Forms and Consequences' Invited Presentation to the Ars Electronica 2005 Symposium on Hybrid - Living in Paradox, Linz, Austria, September 2005, PrePrint at

Clarke R. (2011) 'Cyborg Rights' IEEE Technology and Society 30, 3 (Fall 2011) 49-57, PrePrint at

Clarke R. (2014a) 'Understanding the Drone Epidemic' Computer Law & Security Review 30, 3 (June 2014) 230-246, PrePrint at

Clarke R. (2014b) 'What Drones Inherit from Their Ancestors' Computer Law & Security Review 30, 3 (June 2014) 247-262, PrePrint at

Clarke R. & Bennett Moses L. (2014c) 'The Regulation of Civilian Drones' Impacts on Public Safety' Computer Law & Security Review 30, 3 (June 2014) 263-285, PrePrint at

Clarke R. (2014d) 'The Regulation of Civilian Drones' Impacts on Behavioural Privacy' Computer Law & Security Review 30, 3 (June 2014) 286-305, PrePrint at

Clarke R. (2016) 'Big Data, Big Risks' Information Systems Journal 26, 1 (January 2016) 77-90, PrePrint at

Clarke R. & Taylor K. (2018) 'Towards Responsible Data Analytics: A Process Approach' Proc. Bled eConference, 17-20 June 2018, PrePrint at

Clarke R. (2019a) 'Why the World Wants Controls over Artificial Intelligence' Computer Law & Security Review 35, 4 (Jul-Aug 2019) 423-433, PrePrint at

Clarke R. (2019b) 'Principles and Business Processes for Responsible AI' Computer Law & Security Review 35, 4 (Jul-Aug 2019) 410-422, PrePrint at

Clarke R. (2019c) 'Regulatory Alternatives for AI' Computer Law & Security Review 35, 4 (Jul-Aug 2019) 398-409, PrePrint at

Clarke R. (2022a) 'Responsible Application of Artificial Intelligence to Surveillance: What Prospects?' Information Polity 27, 2 (Jun 2022) 175-191, Special Issue on 'Questioning Modern Surveillance Technologies', PrePrint at

Clarke R. (2023) 'The Re-Conception of AI: Beyond Artificial, and Beyond Intelligence' IEEE Trans. Techno. & Soc. 4, 1 (March 2023) 24-33, PrePrint at

Dreyfus H.L. (1972) 'What Computers Can't Do' MIT Press, 1972; Revised edition as 'What Computers Still Can't Do', 1992

Kurzweil R. (2005) 'The Singularity is Near' Viking Books, 2005

McCarthy J., Minsky M.L., Rochester N. & Shannon C.E. (1955) 'A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence' Reprinted in AI Magazine 27, 4 (2006), at

Simon H.A. (1960) 'The shape of automation' Reprinted in various forms, 1960, 1965, quoted in Weizenbaum J. (1976), pp. 244--245

Weizenbaum J. (1976) 'Computer power and human reason' W.H. Freeman & Co., 1976

Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professorial Fellow associated with UNSW Law & Justice, and a Visiting Professor in the Research School of Computer Science at the Australian National University.


Created: 11 May 2024 - Last Amended: 11 May 2024 by Roger Clarke - Site Last Verified: 15 February 2009