
Roger Clarke's 'Responsible AI - Part 1'

Why the World Wants Controls over Artificial Intelligence

Final Version of 24 April 2019

Computer Law & Security Review 35, 4 (2019) 423-433

This is the first article in a series on 'Responsible AI'.
The other two are on Self-Regulation and Co-Regulation

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2018-19

Available under an AEShareNet Free
for Education licence or a Creative Commons 'Some
Rights Reserved' licence.

This document is at


This article reviews the nature, the current state and a possible future of Artificial Intelligence (AI). AI is described both in the abstract and in four forms that are currently evident not only in laboratories but also in real-world applications. Clarity about the public's concerns is sought by articulating the threats that are inherent within AI. It is proposed that AI needs to be re-conceived as 'complementary artefact intelligence', and that the robotics notion of 'machines that think' needs to give way to the idea of 'intellectics', with the focus on 'computers that do'. This article lays a foundation for two further articles on how organisations can adopt a responsible approach to AI, and how an appropriate regulatory scheme for AI can be structured.


1. Introduction

Since the conception of Artificial Intelligence (AI) in the early post-World War II period, there have been sporadic surges in marketing fervour for various flavours of it. Its aura of mystery and confusion, compounded by a considerable amount of over-claiming, has stimulated periods of public enthusiasm interspersed with 'winters of discontent'.

Several forms of AI are currently being vigorously promoted, and are attracting attention from investors, user organisations, the media and the public. However, along with their promises, they bring major challenges in relation to understandability, control and auditability.

To date, public understanding of AI has been marketer-driven and superficial. This is a perfect breeding-ground for mood-swings, between euphoric and luddite. Many people are wary about AI inherently undermining accountability and stimulating the abandonment of rationality. Cautionary voices have included cosmologist Stephen Hawking (Cellan-Jones 2014), Microsoft billionaire Bill Gates (Mack 2015), and technology entrepreneur Elon Musk (Sulleyman 2017).

Meanwhile, less prominent people are suffering from unreasonable inferences, decisions and actions by AI-based artefacts and systems. One form of harm is unfair and effectively unappealable decisions by government agencies about benefits and penalties, by financiers about credit-granting, by insurers, and by employers. In addition, instances are accumulating of physical harm arising from autonomous acts by artefacts such as cars and aircraft. Aggrieved victims are likely to strike back against the technologies and their purveyors.

This article is addressed to a readership that is technically literate, socially aware, and concerned with technology policy and law. It accordingly assumes moderate familiarity with the topic. It commences with brief overviews of AI and of several key forms of it. The aim is to enable delineation of the threats that accompany AI's promises, and that give rise to the need for responsibility to be shown in relation to its development and deployment.

2. Artificial Intelligence

The term Artificial Intelligence (AI) was coined in 1955 in a proposal for the 1956 Dartmouth Summer Research Project on Artificial Intelligence (McCarthy et al. 1955). The proposal was based on "the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it". Histories of AI (e.g. Russell & Norvig 2009, pp. 16-28, Boden 2016, pp. 1-20) identify multiple strands, but also multiple re-visits to much the same territory, and a considerable degree of creative chaos.

Many attempts have been made to distill out the sense in which the juxtaposition of the two words is to be understood. Conventionally (Albus 1991, Russell & Norvig 2003, McCarthy 2007):

Intelligence is exhibited by an artefact if it (1) evidences perception and cognition of relevant aspects of its environment, (2) has goals, and (3) formulates actions towards the achievement of those goals

The word 'artificial' implies 'artefactual' or 'human-made'. Its conjunction with 'intelligence' has imbued it with competing ideas about whether the yardstick is 'equivalent to human', 'different from human' or 'superior to human'.

The over-enthusiasm that characterises the promotion of AI has deep roots. Simon (1960) averred that "Within the very near future - much less than twenty-five years - we shall have the technical capability of substituting machines for any and all human functions in organisations. ... Duplicating the problem-solving and information-handling capabilities of the brain is not far off; it would be surprising if it were not accomplished within the next decade". Over 35 years later, with his predictions abundantly demonstrated as being fanciful, Simon nonetheless maintained his position, e.g. "the hypothesis is that a physical symbol system [of a particular kind] has the necessary and sufficient means for general intelligent action" (Simon 1996, p. 23 - but expressed in similar terms from the late 1950s, in 1969, and through the 1970s), and "Human beings, viewed as behaving systems, are quite simple" (p. 53). Simon acknowledged "the ambiguity and conflict of goals in societal planning" (p. 140), but his subsequent analysis of complexity (pp. 169-216) considered only a very limited sub-set of the relevant dimensions. Much the same dubious assertions can be found in, for example, Kurzweil (2005): "by the end of the 2020s" computers will have "intelligence indistinguishable to biological humans" (p.25), and in self-promotional documents of the current decade.

AI has offered a long litany of promises, many of which have been repeated multiple times, on a cyclical basis. Each time, proponents have spoken and written excitedly about prospective technologies, using descriptions that not merely verged into the mystical, but even crossed the border into the realms of alchemy. The exaggerations have resulted in under-delivery and a cyclical 'boom and bust' pattern, with research funding being sometimes very easy to obtain, and sometimes very difficult, depending on whether the focus at the time was on the hyperbole or on the very low delivery-rate against promises.

Part of AI's image-problem is that successes deriving from what began as AI research have shed the name. In a quotation widely-attributed to John McCarthy, "As soon as it works, no-one calls it AI anymore". For example, pattern recognition, variously within text, speech and two-dimensional imagery, has made a great deal of progress, and achieved application in multiple fields, as diverse as dictation, optical character recognition (OCR), automated number-plate recognition (ANPR), and object and facial recognition. Game-playing, particularly of chess and go, has surpassed human-expert levels, and provided entertainment value and spin-offs. It is as yet unclear, however, whether AI-based game-playing has provided the breakthroughs towards posthumanism that its proponents appeared to be claiming for it.

Organisations, in business and government alike, need to identify AI technologies that have practical value, and devise ways to apply them so as to achieve benefits without incurring disproportionate disbenefits or giving rise to unjustified risks. A key feature of AI successes to date appears to be that, even where the technology or its application is complex, it is understandable by people with appropriate technical background, i.e. it is not magic and is not presented as magic, and its applications are auditable. AI technologies that have been effective have been able to be piloted and empirically tested in real-world contexts, under sufficiently controlled conditions that the risks have been able to be identified, assessed and then managed.

The scope addressed in this article is very broad, in terms of both technologies and applications, but it excludes design and use for warfare or armed conflict. It is, however, intended to include applications to civil law enforcement and domestic national security, i.e. safeguards for the public, for infrastructure, and for public figures. The following section undertakes brief scans of several current technologies that are within the field of view.

3. AI Exemplars

AI's scope is broad, and contested. This section identifies several technologies that have current relevance. That relevance derives in part from claims of achievement of progress and benefits, and in part from media coverage resulting in awareness among organisations' executives and staff and the general public. In addition to achieving a level of adoption, each faces degrees of technical challenge, public scepticism and resistance.

The following sub-sections briefly review four AI technologies, with a view to enabling commonalities to emerge among the diversity of features.

3.1 Robotics

The two foundational elements of robotics are programmability, implying computational or symbol-manipulative capabilities that a designer can combine as desired (a robot is a computer); and mechanical capability, whereby inbuilt actuators influence its environment (a robot is a machine). A comprehensive design also requires sensors to acquire data from the robot's environment (Arkin 1998).
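The three elements just named can be sketched as a minimal sense-decide-act loop; the sensor reading, the control rule and the actuator's effect below are all invented for illustration, not drawn from any cited system.

```python
# Minimal robotic control loop: a sensor acquires data from the environment,
# programmable computation decides, and an actuator influences the environment.
# The 'world', the rule and the step-size are illustrative assumptions.

def read_sensor(world):
    return world["distance_to_obstacle"]              # sensing the environment

def decide(distance, threshold=1.0):
    # the designer's programmed stimulus-response rule
    return "stop" if distance < threshold else "advance"

def actuate(world, command):
    if command == "advance":
        world["distance_to_obstacle"] -= 0.5          # the actuator alters the world
    return world

world = {"distance_to_obstacle": 2.0}
log = []
for _ in range(4):                                    # the robot's control cycle
    cmd = decide(read_sensor(world))
    log.append(cmd)
    world = actuate(world, cmd)
print(log)   # ['advance', 'advance', 'advance', 'stop']
```

The loop halts of its own accord once the programmed threshold is crossed, which is all that 'programmability plus mechanical capability' requires.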

Robotics has built on its earlier successes in controlled environments such as the factory floor and the warehouse, and is now in direct contact with the public. Some applications are non-obvious, such as low-level control over the attitude, position and course of craft both on or in water and in the air. Others are more apparent. The last few years have seen a great deal of activity in relation to self-driving vehicles (Paden et al. 2016), variously on rails and otherwise, in controlled environments such as mines, quarries and dedicated tram, train and bus routes, and recently in more open environments. In addition, robotics has taken flight, in the form of drones (Clarke 2014a).

Many claims have been made recently about 'the Internet of Things' (IoT) and about systems comprising many small artefacts, such as 'smart houses' and 'smart cities'. For a consolidation and rationalisation of multiple such ideas into the notion of an 'eObject', see Manwaring & Clarke (2015). Many of the initiatives in this area are robotic in nature, in that they encompass all of sensors, computing and actuators. The appearance of robotic technologies in public spaces has attracted attention and rejuvenated concerns about their impacts and implications.

3.2 Cyborgisation

The term 'cyborg' was coined from 'cybernetic organism' to refer to a technologically enhanced human being, originally in the context of survival in extraterrestrial environments (Clynes & Kline 1960). Cyborgisation refers to the process of enhancing individual humans by technological means, such that a cyborg is a hybrid of a human and one or more artefacts (Mann & Niedzviecki 2001, Clarke 2005, Warwick 2014). Many forms of cyborg fall outside the field of AI, such as spectacles, implanted lenses, stents, inert hip-replacements and SCUBA gear. However, a proportion of the artefacts that are used to enhance humans include sensors, computational or programmatic 'intelligence', and one or more actuators. Examples include heart pacemakers (since 1958), cochlear implants (since the 1960s, and commercially since 1978), and some replacement legs for above-knee amputees, in that the artificial knee contains software to sustain balance within the joint.

Many such artefacts replace lost functionality, and are referred to as prosthetics. Others, which can be usefully referred to as orthotics, provide augmented or additional functionality (Clarke 2011). An example of an orthotic is augmented reality for firefighters, displaying building plans and providing object-recognition in their visual field. It was argued in Clarke (2014b) that use by drone pilots of instrument-based remote control, and particularly of first-person view (FPV) headsets, represent a form of orthotic cyborgisation.

Artefacts of these kinds are not commonly included in catalogues of AI technology. On the other hand, they have a great deal in common with it, and research in the field is emergent (Zhaohui et al. 2016). Substantial progress with medical implants (Bhunia et al. 2015) suggests that these technologies have the prospect of becoming a flash-point for public concerns, because they involve direct intervention with the human body.

3.3 Rule-Based Expert Systems

Computing applications for drawing inferences from data began with hard-wired, machine-level and assembler languages (1940-1960), but made great progress with higher-level, imperative languages (indicatively, 1960-1990), particularly those that enabled the coding of genuinely 'algorithmic' programs, such as ForTran (Formula Translator). This approach involves an implied problem that needs to be solved, and an explicit procedural solution to that problem.

During the 1980s, additional means of generating inferences became mainstream, which embody no explicit 'problem' or 'solution'. Rule-based expert systems involve the representation of human expertise as statements about relationships between 'antecedent' and 'consequent' variables, in the form 'if-then'. The relationships may be theoretically-based and/or empirically-derived, mere heuristics / 'rules of thumb', or just hunches. When software that embodies sets of rules is provided with data, it applies the rules to that data, and draws inferences (Giarratano & Riley 1998). Frequently-cited applications include decisions about an individual's eligibility for citizenship or credit-worthiness and about the legality or otherwise of an act or practice.

Unlike algorithmic or procedural approaches, rule-based expert systems embody no conception of either a problem or a solution. A rule-base merely describes a problem-domain in a form that enables inferences to be drawn from it (Clarke 1991). In order to understand the rationale underlying an inference, a human needs access to the rules that were 'fired', and the data that gave rise to their invocation. This may or may not be supported by the software. Even if access is supported, this may or may not enable human understanding of the rationale underlying the inference, and whether or not the inference is reasonable in the circumstances.
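A minimal sketch of such a system shows both the 'if-then' form and the trace of 'fired' rules on which any explanation depends. The credit-worthiness rule-base here is wholly invented and reflects no real policy; it exists only to exhibit the mechanics.

```python
# A minimal forward-chaining rule engine: rules are 'if-then' pairs over named
# variables, applied repeatedly until no new inferences arise. The trace
# records which rules 'fired' - the raw material for any explanation.

def run_rules(rules, facts):
    """Apply rules to facts until quiescence; return the facts and the trace."""
    facts = dict(facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for name, condition, (key, value) in rules:
            if condition(facts) and facts.get(key) != value:
                facts[key] = value
                trace.append(name)          # record the rule that fired
                changed = True
    return facts, trace

# Illustrative rule-base: heuristics invented for this sketch only
rules = [
    ("R1: low income -> high risk",
     lambda f: f.get("income", 0) < 30000, ("risk", "high")),
    ("R2: prior default -> high risk",
     lambda f: f.get("prior_default", False), ("risk", "high")),
    ("R3: high risk -> refuse credit",
     lambda f: f.get("risk") == "high", ("decision", "refuse")),
]

facts, trace = run_rules(rules, {"income": 25000})
print(facts["decision"])   # refuse
print(trace)               # R1 then R3: the chain a human would need to inspect
```

Note that nothing in the rule-base states a 'problem' or a 'solution': the engine simply describes the domain, and the inference emerges from whichever rules the data happens to invoke.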

3.4 AI / ML / Neural Networks

AI research has delivered a further technique, which accords primacy to the data rather than the model, and has the effect of obscuring the model to such an extent that no humanly-understandable rationale exists for the inferences that are drawn. The relevant branch of AI is 'machine learning' (ML), and the most common technique in use is 'artificial neural networks'. The approach dates to the 1950s, but limited progress was made until sufficiently powerful processors were readily available, from the late 1980s.

Neural nets involve a set of nodes (each of which is analogous to the biological concept of a neuron), with connections or arcs among them, referred to as 'edges'. Each connection has a 'weight' associated with it. Each node performs some computation based on incoming data, and may as a result adapt its internal state, in particular the weight associated with each arc, and may pass output to one or more other nodes. A neural net has to be 'trained'. This is done by selecting a training method (or 'learning algorithm') and feeding a 'training-set' of data to the network in order to load up the initial set of weights on the connections between nodes.

Unlike previous techniques for developing software, neural networking approaches need not begin with active and careful modelling of a real-world problem-solution, problem or even problem-domain. Rather than comprising a set of entities and relationships that mirrors what the analyst has determined to be the key elements and processes of a real-world system, a neural network model may be merely lists of input variables and output variables (and, in the case of 'deep' networks, one or more levels of intermediary variables). To the extent that a model exists, in the sense of a representation of the real world, it is implicit rather than express. The weights imputed for each connection reflect the characteristics firstly of the training-set that was fed in, and secondly of the particular learning algorithm that was imposed on the training-set.
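A deliberately toy sketch makes the point concrete: a single logistic 'neuron' trained by gradient descent on invented data. The imputed weights are the whole of the 'model', and two training-sets that sample the same domain differently yield different inferences for the same input.

```python
import math

def train(training_set, epochs=200, lr=0.5):
    """Fit a single logistic neuron by gradient descent.
    The imputed weights ARE the model: nothing else is represented."""
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), target in training_set:
            out = 1.0 / (1.0 + math.exp(-(w0 * x0 + w1 * x1 + b)))
            err = out - target
            w0 -= lr * err * x0        # each weight adapts to the data fed in
            w1 -= lr * err * x1
            b  -= lr * err
    return w0, w1, b

def infer(model, x0, x1):
    w0, w1, b = model
    return 1.0 / (1.0 + math.exp(-(w0 * x0 + w1 * x1 + b)))

# Two training-sets that sample the 'same' domain differently (invented data)
set_a = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
set_b = [((0, 0), 0), ((1, 0), 1), ((1, 1), 1)]

model_a = train(set_a)
model_b = train(set_b)

# The same input is classified differently: the inference reflects the
# training-set and the learning algorithm, not any express model of the task
print(round(infer(model_a, 1, 0)), round(infer(model_b, 1, 0)))  # 0 1
```

The weights carry no humanly-readable rationale; inspecting them reveals only numbers, which is precisely the opaqueness problem discussed below.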

Enthusiasts see great prospects in neural network techniques, e.g. "There has been a number of stunning new results with deep-learning methods ... The kind of jump we are seeing in the accuracy of these systems is very rare indeed" (Markoff 2012). They claim that noisy and error-ridden data presents no problems, provided that there's enough of it. They also claim that the techniques have a wide range of application areas. Sceptics, on the other hand, perceive that the techniques' proponents overlook serious weaknesses (Marcus 2018), and in effect treat empiricism as entirely dominating theory. Combining these issues with questions about the selectivity, accuracy and compatibility of the data gives rise to considerable uncertainty about the techniques' degree of affinity with the real world circumstances to which they are applied.

Inferences drawn using neural networking inevitably reflect errors and biases inherent in the implicit model, in the selection of real-world phenomena for which data was created, in the selection of the training-set, and in the particular learning algorithms used to develop the application. Means are necessary to assess the quality of the implicit model, of the data-set, of the data-item values, of the training-set and of the learning algorithm, and of the compatibility among them all, and to validate the inferences both logically and empirically. Unless and until those means are found, and are routinely applied, AI/ML and neural nets need to be regarded as unproven techniques that harbour considerable dangers to the interests of organisations and their stakeholders.

3.5 Commonalities among these AI Exemplars

The four AI technologies outlined here exhibit considerable differences, but also some commonalities. One important common factor is the lack of transparency about the means whereby inferences are drawn, decisions are made, and (in two cases) actions are taken. The fog may be so dense that no scope exists for human understanding of the process, and there may even be no rationale and no means of reconstructing one. Another common feature is intrusiveness into human affairs, in some cases by the very nature of the technology, and in others as a result of the contexts within which they are applied. Proponents of the technologies also make assumptions about the nature of the data on which they depend, often without checking that the assumptions are justified, and without meaningful consideration of the implications if they turn out to be wrong.

4. The Threats Inherent in AI

The characteristics of AI, and of the four mainstream forms outlined in the previous section, give rise to a wide array of serious concerns about AI's impacts and implications (e.g. Scherer 2016, esp. pp. 362-373, Yampolskiy & Spellchecker 2016, Duursma 2018). Many of the concerns may be keenly-felt, but are vague, such as the disruption of work-based income-distribution, the imposition of predestination on individuals, the dominance of collectivism over individualism, the undermining of human rights, the disruption of culture, the dominance of the powerful over the weak, and the risk of undermining the meaningfulness of human life.

The following is proposed as an expression of concern that has the capacity to provide guidance for responsible behaviour:

AI gives rise to errors of inference, of decision and of action, which arise from the more or less independent operation of artefacts, for which no rational explanations are available, and which may be incapable of investigation, correction and reparation

Even this expression requires unpacking, however, in order to identify problems that can be addressed by the crafting of safeguards. The following sections discuss five factors that underlie the above expression of the concerns about AI. The first consideration is the extent of human delegation to artefacts. This is followed by a consideration of assumptions about data, about the processes used to draw inferences from data, and about the opaqueness of those processes. The final factor examined is the failure to sheet home responsibilities to the entities involved in the AI industry supply chain.

4.1 Artefact Autonomy

The concept of 'automation' is concerned with the performance of a predetermined procedure, or response in predetermined ways to alternative stimuli. It is observable in humans, e.g. under hypnosis, and is designed-into many kinds of artefacts. The rather different notion of 'autonomy' means, in humans, the capacity for independent decision and action. Further, in some contexts, it also encompasses a claim to the right to exercise that capacity. It is associated with the notions of consciousness, sentience, self-awareness, free will and self-determination.

A common feature of the four AI technologies discussed earlier is that, to a much greater extent than in the past, software is drawing inferences, making decisions, and taking action. Put another way, artefacts are being imbued with a much greater degree of autonomy than was the case in the past.

Artefact autonomy may merely comprise a substantial repertoire of pre-programmed stimulus-response relationships. Alternatively, it may extend to the capacity for auto-adaptation of aspects of those relationships, or for the creation of new relationships. For example, where machine-learning is applied, the stimulus-response relationships change over time depending on the cases handled in the intervening period.
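That distinction can be sketched as follows, with all stimuli, responses and the adaptation rule invented for illustration: a fixed repertoire behaves identically forever, whereas an auto-adaptive one rewrites its stimulus-response relationships in the light of the cases it handles.

```python
# (a) Pre-programmed autonomy: a fixed repertoire the designer enumerated
FIXED_RESPONSES = {
    "obstacle_ahead": "brake",
    "lane_drift": "steer_correct",
    "low_battery": "return_to_base",
}

def respond_fixed(stimulus):
    return FIXED_RESPONSES.get(stimulus, "request_human_input")

# (b) Auto-adaptive autonomy: the mapping changes with the cases handled,
# so the artefact's behaviour at any time depends on its intervening history
class AdaptiveController:
    def __init__(self):
        self.responses = dict(FIXED_RESPONSES)

    def handle(self, stimulus, outcome_was_bad, alternative):
        action = self.responses.get(stimulus, "request_human_input")
        if outcome_was_bad:
            # the stimulus-response relationship is rewritten by experience
            self.responses[stimulus] = alternative
        return action

ctl = AdaptiveController()
ctl.handle("obstacle_ahead", outcome_was_bad=True, alternative="swerve")
print(respond_fixed("obstacle_ahead"))    # brake  (always, by design)
print(ctl.responses["obstacle_ahead"])    # swerve (changed by one bad case)
```

The fixed artefact's behaviour can be exhaustively audited in advance; the adaptive artefact's cannot, because its repertoire at deployment differs from its repertoire after a period of use.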

As a result of emergent artefact autonomy, humanity is in the process of delegating not to humans, but to human inventions. This gives rise to uncertainties whose nature is distinctly different from prior and well-trodden paths of human and organisational practice. A further relevant factor is that autonomous artefacts have a high likelihood of stimulating repugnance among a proportion of the public, and hence giving rise to luddite behaviour.

In humans, autonomy is best approached as a layered phenomenon. Each of us performs many actions in a subliminal manner. For example, our eye and ear receptors function without us ever being particularly aware of them, and several layers of our neural systems process the signals in order to offer us cognition, that is to say awareness and understanding, of the world around us.

A layered approach is applicable to artefacts as well. Aircraft generally, including drones, may have layers of behaviour that occur autonomously, without pilot action or even awareness. Maintenance of the aircraft's 'attitude' (orientation to the gravity-relative vertical and horizontal), and angle to the wind-direction, may, from the pilot's viewpoint, simply happen. At a higher level of delegation, the aircraft may adjust its flight controls in order to sustain progress towards a human-pre-determined or human-amended destination, or in the case of rotorcraft, to maintain the vehicle's location relative to the earth's surface. A higher-order autonomous function is inflight manoeuvring to avoid collisions. At a yet higher level, some aircraft can perform take-off and/or landing autonomously, and some drones that lose contact with their pilot can decide when and where to land. To date, high-order activities that are seldom if ever autonomous include decisions about the mission objective and when to take off, and adjustments to the objective and destination.

Artefact autonomy can be absolute, but is more commonly qualified, in that a human - or perhaps some superordinate artefact - can exercise some degree of control over the artefact's behaviour. Table 1 draws on and simplifies various models that provide structure to that relationship, including Armstrong (2010, p. 14), Clarke (2014a, Table 1) and Sheridan & Verplank (1978, Table 8.2, pp. 8-17 to 8-19) as interpreted by Robertson et al. (2019, Table 1).

Table 1: Degrees of Autonomy

Level   Function of the Artefact      Function of the Controller
  1     (No function)                 Analyse, Decide and Act
  2     Analyse Options               Decide among Options
  3     Advise re Options             Decide among Options
  4     Recommend Action              Approve/Reject Recommended Action
  5     Notify an Impending Action    Override/Veto an Impending Action
  6     Act and Inform                Interrupt/Suspend/Cancel an Action
  7     Analyse, Decide and Act       (No function)

It is readily argued that the degree of autonomy granted to artefacts needs to reflect the layer at which the particular function is operating. The sequence in which the alternatives are presented in Table 1 corresponds with those layers. At the lowest level (7), the rapidity with which analysis, decision and action need to be undertaken may preclude conscious human involvement. At the other extreme (1), artefacts lack the capability to deal with the complexities, ambiguities, variability, fluidity, value-content and value-conflicts inherent in important real-world decision-making (Dreyfus 1992). There are circumstances (5-6) in which it is appropriate to enable autonomous behaviour by artefacts subject to human interruption or override. In other circumstances (2-4), the appropriate approach is for the artefact to provide decision support to humans, through analysis, advice and/or recommendation.

There appears to be de facto public acceptance of the notion of delegation of low-level, real-time functions to artefacts. Even at that level, however, AI is adding a further level of mystery. It remains to be seen whether the public will continue to accept inexplicable events resulting in aircraft and driverless-vehicle incidents. Following the crash of a second Boeing 737 Max in early 2019, the US President voiced a popular sentiment, to the effect that pilots should be professionals who can easily and quickly take control of their aircraft. That portends an edict that robot autonomy, at least for passenger aircraft, will be limited to revocable autonomy (5-6), with layer 7 prohibited. In respect of less structured decisions, there seems little prospect of public acceptance even of revocable automated decision-making.

IEEE, even though it is one of the most relevant professional associations in the field, made no meaningful attempt to address these issues for decades. It is currently endeavouring to do so. It commenced with a discussion paper (IEEE 2017) which avoids the term 'artificial', and prefers the term 'Autonomous and Intelligent Systems (A/IS)'.

4.2 Assumptions about Data

An artefact's awareness of its environment depends on data variously provided to it and acquired by its sensors. Any deficiencies in the quality of that data undermine the appropriateness of the artefact's inferences, decisions and actions.

Data quality is a function of a large set of factors (Wang & Strong 1996, Clarke 2016b). Beyond validity, accuracy, precision, timeliness, completeness, and general and specific relevance, the correspondence of the data with the real-world phenomena that the process assumes it to represent depends on appropriate identity association, attribute association and attribute signification.

Where data is drawn from multiple sources, definitional and quality consistency among those sources is almost inevitably a limiting factor, yet it is seldom considered (Widom 1995). Data scrubbing (or 'cleansing') may be applied; but this is a dark art, and most techniques generate some errors in the process of correcting others (Mueller & Freytag 2003). Further, attention has already been drawn to the often-expressed claim that, with sufficiently large volumes of data, the impacts of low data, matching and scrubbing quality automatically smooth themselves out. This is a justifiable claim in specific circumstances, but in most cases is a magical incantation that does not hold up under cross-examination (boyd & Crawford 2012).
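A small invented example illustrates how a scrubbing rule can correct one error while generating another; both the records and the rule are assumptions for the sketch, not drawn from any cited technique.

```python
# A naive scrubbing rule over an 'age' field. It genuinely repairs one
# record and silently falsifies another. All data is invented.
records = [
    {"id": 1, "age": -34},    # sign error: scrubbing to 34 is a genuine fix
    {"id": 2, "age": 104},    # valid outlier: 'cleansing' it corrupts the data
]

def scrub_age(rec, cap=100):
    rec = dict(rec)
    if rec["age"] < 0:
        rec["age"] = -rec["age"]    # repairs record 1
    if rec["age"] > cap:
        rec["age"] = cap            # falsifies record 2, with no audit trail
    return rec

cleaned = [scrub_age(r) for r in records]
print([r["age"] for r in cleaned])  # [34, 100]
```

Downstream processes see only the scrubbed values, so the error introduced by the second branch is indistinguishable from clean data.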

4.3 Assumptions about the Inferencing Process

Endeavours are being made to apply robotics outside the controlled environments in which they have enjoyed success (factories, warehouses, and thinly human-populated mining sites) to contexts in which there is much more variability and unpredictability, and much less structure (such as public roads, households, and human care applications).

In the case of the flying robots popularly called drones, considerable challenges confront the design and deployment even of a generally applicable process for safe landing when communications with the pilot are lost, let alone collision-detection capabilities, far less collision-avoidance functionality. Yet these processes are expected, and in some cases legally required, of current, human-operated activities, and hence are pre-conditions for AI-based substitutes.

Where AI technologies depend on the drawing of inferences from data, confidence is needed that, in each case, and before reliance is placed upon it, the inferencing process's applicability to the particular problem-category or problem-domain has been demonstrated - preferably both theoretically and empirically.

A further issue is the suitability of the available data as input to the particular inferencing process. A great deal of data is on nominal scales (which merely distinguishes categories). Some is on ordinal scales (implying a structured relationship between categories, such as 'good, better, best'), and some is on cardinal scales (with equal intervals between the categories, such as temperature expressed in degrees Celsius). Only a limited range of analytical tools is available for data on such scales. Most of the powerful statistical tools applied by data analysts assume that all of the data is on ratio scales (which feature equal intervals and a natural zero, such as degrees Kelvin). Many analyses abuse the rules of statistics by applying techniques inappropriately. Mixed-mode data (i.e. where the various items of data are on different kinds of scale) is particularly challenging to deal with. Further, most tools cannot cope with missing values, and hence more or less arbitrary values need to be invented. Given the problems that need to be overcome, it is highly inadvisable for inferencing mechanisms to be relied upon as a basis for decision-making that has material consequences, unless and until their applicability to the data in question has been subjected to independent analysis and certification.
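The scale distinctions above can be made concrete with Python's statistics module; the data values below are invented, and the point is only which summary statistic each scale legitimately supports.

```python
from statistics import mode, median, mean

nominal = ["red", "blue", "red", "green"]   # categories only
ordinal = [1, 2, 2, 3]                      # 'good, better, best' as codes
ratio   = [273.15, 293.15, 310.15]          # degrees Kelvin: a natural zero

print(mode(nominal))     # red : the only legitimate 'average' for nominal data
print(median(ordinal))   # 2.0 : order is meaningful, but intervals are not
print(mean(ratio))       # means and ratios are legitimate only on ratio scales

# The abuse: encoding categories as numbers makes a 'mean' computable
# but meaningless - arithmetic on nominal data, not analysis of it
codes = {"red": 1, "blue": 2, "green": 3}
print(mean(codes[c] for c in nominal))      # 1.75, which signifies nothing
```

The last line runs without complaint, which is exactly the problem: the software cannot tell that the statistic is inapplicable to the scale, so the analyst must.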

Of particular concern are assertions that empirical correlation unguided by theory is enough, and that rational explanation is a luxury that the world needs to learn to live without. These cavalier claims are published not only by excitable journalists but also by influential academics (Anderson 2008, LaValle et al. 2011, Mayer-Schoenberger & Cukier 2013).

4.4 Opaqueness of the Inferencing Process

Some forms of AI, importantly including neural networking, are largely empirical rather than based on an established theory. Moreover, where they embody any form of machine learning, their performance may vary over time even though the context appears unchanged. Some other AI technologies are built on a stronger theoretical base, but are complex and multi-layered. These characteristics make it difficult for humans to grasp how AI does what it does, and to explain and understand the inferences it draws, the decisions it makes, and the actions it takes (Burrell 2016, Knight 2017).

This lack of transparency gives rise to many further features, summarised in Table 2. Not all of these may be evident in any given situation, but all of them may have serious consequences for individuals and organisations:

Table 2: Implications of the Lack of Process Transparency

Where decision transparency is absent, the accountability of organisations for their decisions is undermined. Where entities are secure in the knowledge that blame cannot be sheeted home to them, irresponsible behaviour is inevitable. Under threat are the established principles of evaluation, fairness, proportionality, evidence-based decision-making, and the capacity to challenge decisions (APF 2013).

There is increasing public pressure for explanations to be provided for decisions that are adverse to the interests of individuals and of small business. The responsibility of decision-makers to provide explanations has always been implied by the principles of natural justice and procedural fairness. In many jurisdictions, administrative law imposes specific requirements on government agencies. In the private sector as well, organisations are gradually becoming subject to legal provisions. In the EU, since mid-2018, access must be provided to "meaningful information about the logic involved", "at least in" the case of automated decisions (GDPR 2018, Articles 13.2(f), 14.2(g) and 15.1(h), Selbst & Powles 2017). The scope and effectiveness of these provisions is as yet unclear. One interpretation is that "the [European Court of Justice] has ... made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent ... [Hence] a new data protection right, the 'right to reasonable inferences', is needed" (Wachter & Mittelstadt 2019).

4.5 Irresponsibility

A further factor is at work in undermining accountability. There has to date been inadequate discrimination among the various stages of the supply-chain from laboratory experiment to deployment in the field. This leads to a failure to assign responsibilities to the various categories of entities.

In Table 3, the AI supply-chain is depicted as a succession of phases, from laboratory experiment to deployment in the field. Distinctions are drawn among technology, artefacts that embody the technology, systems that incorporate the artefacts, and applications of those systems. Appropriate responsibilities can then be assigned to, successively, researchers, inventors, innovators, purveyors, and users. Each of these categories of entity bears moral responsibility for disbenefits arising from AI. Further, each of these categories of entity needs to be subject to legal constraints and obligations, commensurate with the roles that they play.

Table 3: Entities with Responsibilities in Relation to AI

AI Technology: Researchers
AI-Based Artefacts: Inventors (R&D Engineers)
AI-Based Systems: Innovators
Installed AI-Based Systems: Purveyors and Users

This section has sought to unbundle the many aspects of AI that embody threats, and that are at the heart of the public's demands for controls over AI. The following two articles in the series examine how organisations can exercise responsibility in the consideration of AI, and how a regulatory regime can be structured to ensure effective safeguards. The final section in this paper suggests that reconception of the field can be instrumental in assisting in the achievement of responsibility in relation to technology, artefacts and systems.

5. Rethinking AI

A major contributor to AI's problems has been the diverse and often conflicting conceptions of what it is, and what it is trying to achieve. After 65 years of confusion, it is high time that the key ideas were disentangled, and an interpretation adopted that can assist user organisations to appreciate the nature of the technologies, and then analyse their potential contributions and downsides.

This section suggests two conceptualisations that are intended to assist in understanding and addressing the technical, acceptance and adoption challenges.

5.1 Complementary Artefact Intelligence

If the intelligence that AI delivers is intended to be 'equivalent to human', some doubt has to be expressed about the value of the exercise. It is far from clear that there was a need for yet more human intelligence in 1955, when there were 2.8 billion people, let alone now, when there are over 7 billion of us, many under-employed and likely to remain so. If, on the other hand, the intelligence sought is in some way 'superior-to-human', the question arises as to how superiority is to be measured. For example, is playing a game better than human experts necessarily a useful measure? There is also a conundrum embedded in this approach: if artificial intelligence is superior to human intelligence, can human intelligence reliably define what 'superior-to-human' intelligence means?

An alternative approach may better describe what humankind needs. An idea that is traceable at least to Wyndham (1932) is that " ... man and machine are natural complements: They assist one another". I argued in Clarke (1989) that there was a need to "deflect the focus ... toward the concepts of 'complementary intelligence' and 'silicon workmates' ... to complement human strengths and weaknesses, rather than to compete with them". Again, in Clarke (1993), reprised in Clarke (2014b), I reasoned that: "Because robot and human capabilities differ, for the foreseeable future at least, each will have specific comparative advantages. Information technologists must delineate the relationship between robots and people by applying the concept of decision structuredness to blend computer-based and human elements advantageously".

Adopting this approach, AI needs to be re-conceived such that its purpose is to extend human capabilities, by working with people and other artefacts. The following operational definition is proposed:

Complementary Artefact Intelligence:
(1) does things well that humans do poorly or cannot do at all
(2) performs functions within systems that include both humans and artefacts; and
(3) interfaces effectively, efficiently and adaptably with both humans and artefacts

A term and concept related to, but distinct from, 'complementary intelligence' is 'augmented intelligence' (Engelbart 1962, Mann 2001), which is currently enjoying a revival. A fuller description of the concept that this section is addressing is as follows:

Complementary Artefact Intelligence refers to forms of Artefact Intelligence that are complementary to Human Intelligence, and that work with Human Intelligence synergistically, thereby producing a blend of human and artefact intelligence to which the term Augmented Intelligence is applied

An alternative, imprecise but cute depiction is:

Human Intelligence
+ Complementary Artefact Intelligence
= Augmented Intelligence

An important category of Complementary Artefact Intelligence is the use of negative-feedback mechanisms to achieve automated equilibration within human-made systems. A longstanding example is the maintenance of ship trim and stability by means of hull shape and careful weight distribution, including ballast. A more commonly celebrated instance is Watt's fly-ball governor for regulating the speed of a steam engine. Of more recent origin are schemes to achieve real-time control over the orientation of craft floating in fluids, and maintenance of their location or path. There are successful applications to deep-water oil-rigs, underwater craft, and aircraft both with and without pilots on board. The notion is exemplified by the distinction drawn in Table 1 above between decision support systems (DSS), which are designed to assist humans to make decisions, and decision systems (DS), whose purpose is to make decisions without human involvement. MIT Media Lab's Joichi Ito has used the term 'extended intelligence' in a manner that links the notions of complementary artefact intelligence, augmented intelligence and responsible AI (Simonite 2018).
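The negative-feedback principle can be sketched in a few lines of illustrative code. The model, gain and step count below are simplifying assumptions for exposition, not a description of any real control system:

```python
# A minimal negative-feedback loop of the kind described above: the
# artefact senses its state, compares it with a target, and acts so as
# to reduce the error. Gain and step count are illustrative assumptions.
def equilibrate(state, target, gain=0.5, steps=50):
    """Proportional controller: each step corrects a fraction of the error."""
    for _ in range(steps):
        error = target - state        # sense: deviation from the setpoint
        state += gain * error         # act: correction opposes the error
    return state

# A craft starting 10 units away from its target depth converges on it,
# because each iteration shrinks the remaining error geometrically.
final = equilibrate(state=0.0, target=10.0)
print(round(final, 3))
```

The defining feature is that the correction always opposes the error, which is what makes the loop self-equilibrating without any human in it.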

There are circumstances in which computer-based systems have clear advantages over humans, e.g. where significant computation is involved, and reliability, accuracy, and speed of inferencing, decision-making and/or action-taking are important. A pre-condition is, however, that a satisfactory structured process must exist. An alternative pre-condition may emerge, but is contentious. Some purely empirical techniques, and perhaps even heuristics ('rules of thumb'), may achieve widespread acceptance, e.g. if they are well-demonstrated to be more effective than either theory-driven approaches or human-performed decision-making.

Computer-based systems may have further advantages in relation to cost, and in relation to what in military contexts are referred to as "dull, dirty, or dangerous missions". Even where such superiority can be demonstrated, however, the need exists to shift discussion away from 'AI' to complementary intelligence, to technologies that augment human capabilities, and to systems that feature collaboration between humans and artefacts.

I contend that the use of the Complementary Artefact Intelligence notion can assist organisations in their efforts to distinguish uses of AI that have prospects for adoption, for the generation of net benefits, for the management of disbenefits, and for the achievement of public acceptability.

5.2 Intellectics

Robotics began with machines (in the sense of mechanical apparatus) being enhanced with computational elements and software. However, the emphasis has been shifting. I contend that the conception now needs to be inverted, and the field regarded instead as computers enhanced with sensors and actuators, enabling computational processes to sense the world and act directly on it. Rather than 'machines that think', the focus needs to be on 'computers that do'. The term 'intellectics' is a useful means of encapsulating that switch in emphasis.

The term 'intellectics' has been previously used in a related but somewhat different manner by Wolfgang Bibel, originally in German (1980, 1989). Bibel was referring to the combination of Artificial Intelligence, Cognitive Science and associated disciplines, using the notion of the human intellect as the integrating element. Bibel's sense of the term has gained limited currency, with only a few mentions in the literature and only a few authors citing the relevant papers. The sense in which I use the term here is as follows:

Intellectics refers to a context in which artefacts go beyond merely drawing inferences from data, in that they take autonomous action in the real world

In Table 1, decision systems were contrasted with decision support systems on the basis of the artefact's degree of autonomy. Table 4 identifies the forms that intellectics may take, and the threshold test to apply in order to identify it.

Table 4: Forms of Intellectics

Full Artefact Autonomy
An artefact makes a decision, and takes action in the real world to give effect to that decision, without an opportunity for a human to prevent the action being taken

Revocable Artefact Autonomy
An artefact makes a decision, and informs a human controller that the action has been taken, and the human has the opportunity and the capacity to interrupt the action

Overridable Artefact Autonomy
An artefact makes a decision, and informs a human controller that the action will be taken unless the human exercises their power to veto it, and the human has the opportunity and capacity to prevent the action


The Threshold Test
An artefact communicates a recommended action to a human. If the artefact cannot proceed with the action unless the human exercises their power to accept the recommendation, then the artefact does not have autonomy
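The categories in Table 4, together with the threshold test, can be expressed as a simple decision rule. The following sketch is illustrative only; the predicate names are my own, not the article's:

```python
# A sketch of the Table 4 classification as a decision rule.
from enum import Enum

class Autonomy(Enum):
    NONE = "No autonomy (recommendation only)"
    OVERRIDABLE = "Overridable Artefact Autonomy"
    REVOCABLE = "Revocable Artefact Autonomy"
    FULL = "Full Artefact Autonomy"

def classify(requires_human_acceptance, human_can_veto_beforehand,
             human_can_interrupt_after):
    # Threshold test: if the artefact cannot proceed without the human
    # accepting its recommendation, it has no autonomy at all.
    if requires_human_acceptance:
        return Autonomy.NONE
    # The human is informed before the action and can veto it.
    if human_can_veto_beforehand:
        return Autonomy.OVERRIDABLE
    # The human is informed and can interrupt an action under way.
    if human_can_interrupt_after:
        return Autonomy.REVOCABLE
    # No opportunity for a human to prevent the action being taken.
    return Autonomy.FULL

print(classify(True, False, False).value)   # recommendation only
print(classify(False, False, False).value)  # full autonomy
```

Note that the ordering of the tests matters: the threshold test is applied first, and full autonomy is the residual case in which no human intervention point exists.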

The effect of implementing Intellectics is to at least reduce the moderating effect of humans in the decision-loop, and even to remove that effect entirely. Applying the notion of Intellectics has the benefit of bringing into much stronger focus the importance of assuring legitimacy of the data, of the inferencing technique, and of the inferences, decisions and actions.

In the case of inferencing based on neural networks, for example, major challenges that have to be satisfactorily addressed include the choice of learning algorithm, the availability and choice of sufficient training data, the quality of the training data, the significance of and the approaches adopted to data scrubbing and to empty cells within the training data, and the quality of the data to which the neural network is then applied (Clarke 2016a, 2016b).
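As a purely illustrative example of the kind of pre-flight check implied here, the following sketch reports the attributes of a (hypothetical) training dataset whose missing-value rate exceeds a chosen threshold. The field names and the threshold are assumptions, not anything prescribed by the cited sources:

```python
# Illustrative training-data quality check: the proportion of empty
# cells per attribute. Field names and threshold are hypothetical.
def missing_value_report(rows, threshold=0.1):
    """Return attributes whose missing-value rate exceeds the threshold."""
    counts = {}
    for row in rows:
        for key, value in row.items():
            total, missing = counts.get(key, (0, 0))
            counts[key] = (total + 1, missing + (value is None))
    return {k: m / t for k, (t, m) in counts.items() if m / t > threshold}

training_data = [
    {"age": 34, "income": 52000},
    {"age": None, "income": 61000},
    {"age": 29, "income": None},
    {"age": None, "income": 45000},
]
print(missing_value_report(training_data))  # {'age': 0.5, 'income': 0.25}
```

A report of this kind only surfaces the problem; the harder questions, as the paragraph above indicates, concern what is then done about the empty cells, since invented substitute values propagate into whatever the network learns.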

6. Conclusions

This article has outlined AI, both in the abstract and through four exemplar technologies. That has enabled clarification of the threats inherent in AI, thereby articulating the vague but intense public concerns about the phenomenon.

This article has also proposed that the unserviceable notion of AI should be replaced by the notion of 'complementary artefact intelligence', and that the notion of robotics ('machines that think') is now much less useful than that of 'intellectics' ('computers that do'). In the near future, it may be possible to continue discussions using those terms. Currently, however, the mainstream discussion is about 'AI', and the further two articles in this series reflect that norm.

Sensor-computer-actuator packages are now generating a strong impulse for action to be taken in and on the real world, at the very least communicating a recommendation to a human, but sometimes generating a default-decision that is subject to being overridden or interrupted by a human, and even acting autonomously based on the inferences that software has drawn.

A power-shift towards artefacts is of enormous significance for humankind. It is also, however, a power-shift away from individuals and towards the mostly large and already-powerful organisations that control AI-based artefacts. Substantial pushback from the public needs to be anticipated. Existing regulatory arrangements need to be reviewed in light of the risks arising from AI. If adequate safeguards do not exist, new regulatory obligations will need to be imposed on organisations.

This article has identified a wide range of reasons why responsible behaviour by organisations in relation to AI is vital to the future for individuals, society and even humankind as a whole. The next article in the series examines how organisations can adapt their business processes, and apply a body of principles, in order to act responsibly in relation to AI technologies and AI-based artefacts and systems. The third article then addresses the question of how a regulatory regime can be structured, in order to encourage, and enforce, appropriate behaviour by all organisations.


Albus J. S. (1991) 'Outline for a theory of intelligence' IEEE Trans. Systems, Man and Cybernetics 21, 3 (1991) 473-509, at

Anderson C. (2008) 'The End of Theory: The Data Deluge Makes the Scientific Method Obsolete' Wired Magazine 16:07, 23 June 2008, at

APF (2013) 'Meta-Principles for Privacy Protection' Australian Privacy Foundation, March 2013, at

Arkin R.C. (1998) 'Behavior-based Robotics' MIT Press, 1998

Armstrong A.J. (2010) `Development of a Methodology for Deriving Safety Metrics for UAV Operational Safety Performance Measurement' Report , Master of Science in Safety Critical Systems Engineering, Department of Computer Science, York University, January 2010, at

Bennett Moses L. (2011) 'Agents of Change: How the Law Copes with Technological Change' Griffith Law Review 20, 4 (2011) 764-794, at

Bhunia S., Majerus S.J.A. & Sawan M. (Eds.) (2015) 'Implantable Biomedical Microsystems: Design Principles and Applications' ScienceDirect, 2015

Bibel W. (1980) ''Intellektik' statt 'KI' -- Ein ernstgemeinter Vorschlag' Rundbrief der Fachgruppe Kuenstliche Intelligenz in der Gesellschaft fuer Informatik, 22, 15-16 December 1980

Bibel W. (1989) 'The Technological Change of Reality: Opportunities and Dangers' AI & Society 3, 2 (April 1989) 117-132

Boden M. (2016) 'AI: Its Nature and Future' Oxford University Press, 2016

boyd D. & Crawford K. (2012) 'Critical Questions for Big Data' Information, Communication & Society, 15, 5 (June 2012) 662-679, at

Burrell J. (2016) 'How the machine 'thinks': Understanding opacity in machine learning algorithms' Big Data & Society 3, 1 (January-June 2016) 1-12

Cellan-Jones R. (2014) 'Stephen Hawking warns artificial intelligence could end mankind' BBC News, 2 December 2014, at

Chen Y. & Cheung A.S.Y. (2017) 'The Transparent Self Under Big Data Profiling: Privacy and Chinese Legislation on the Social Credit System' The Journal of Comparative Law 12, 2 (June 2017) 356-378, at

Clarke R. (1989) 'Knowledge-Based Expert Systems: Risk Factors and Potentially Profitable Application Area', Xamax Consultancy Pty Ltd, January 1989, at

Clarke R. (1991) 'A Contingency Approach to the Application Software Generations' Database 22, 3 (Summer 1991) 23-34, PrePrint at

Clarke R. (1993) 'Asimov's Laws of Robotics: Implications for Information Technology' in two parts, in IEEE Computer 26,12 (December 1993) 53-61, and 27,1 (January 1994) 57-66, at

Clarke R. (2005) 'Human-Artefact Hybridisation: Forms and Consequences' Proc. Ars Electronica 2005 Symposium on Hybrid - Living in Paradox, Linz, Austria, 2-3 September 2005, PrePrint at

Clarke R. (2011) 'Cyborg Rights' IEEE Technology and Society 30, 3 (Fall 2011) 49-57, at

Clarke R. (2014a) 'Understanding the Drone Epidemic' Computer Law & Security Review 30, 3 (June 2014) 230-246, PrePrint at

Clarke R. (2014b) 'What Drones Inherit from Their Ancestors' Computer Law & Security Review 30, 3 (June 2014) 247-262, PrePrint at

Clarke R. (2016a) 'Big Data, Big Risks' Information Systems Journal 26, 1 (January 2016) 77-90, PrePrint at

Clarke R. (2016b) 'Quality Assurance for Security Applications of Big Data' Proc. EISIC'16, Uppsala, 17-19 August 2016, PrePrint at

Clarke R. (2018) 'Centrelink's Big Data 'Robo-Debt' Fiasco of 2016-17' Xamax Consultancy Pty Ltd, January 2018, at

Clynes M.E. & Kline N.S. (1960) 'Cyborgs and Space' Astronautics, September 1960, pp. 26-27 and 74-75; reprinted in Gray, Mentor, and Figueroa-Sarriera, eds. 'The Cyborg Handbook' New York: Routledge, 1995, pp. 29-34

Dreyfus H.L. (1992) 'What Computers Still Can't Do: A Critique of Artificial Reason' MIT Press, 1992

Engelbart D.C. (1962) 'Augmenting Human Intellect: A Conceptual Framework' SRI Summary Report AFOSR-3223, October 1962, at

GDPR (2018) 'General Data Protection Regulation' Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, at

IEEE (2017) 'Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS)' IEEE, Version 2, December 2017, at

Knight W. (2017) 'The Dark Secret at the Heart of AI' MIT Technology Review, 11 April 2017

LaValle S., Lesser E., Shockley R., Hopkins M.S. & Kruschwitz N. (2011) 'Big Data, Analytics and the Path From Insights to Value' Sloan Management Review (Winter 2011 Research Feature), 21 December 2010, at

McCarthy J. (2007) 'What is artificial intelligence?' Department of Computer Science, Stanford University, November 2007, at

McCarthy J., Minsky M.L., Rochester N. & Shannon C.E. (1955) 'A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence' Reprinted in AI Magazine 27, 4 (2006), at

Mack E. (2015) 'Bill Gates Says You Should Worry About Artificial Intelligence' Forbes Magazine, 28 January 2015, at

Mann S. (2001) 'Wearable Computing: Toward Humanistic Intelligence' IEEE Intelligent Systems 16, 3 (2001) 10-15, at

Mann S. & Niedzviecki H. (2001) 'Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer' Random House, 2001

Manwaring K. & Clarke R. (2015) 'Surfing the third wave of computing: a framework for research into eObjects' Computer Law & Security Review 31,5 (October 2015) 586-603, PrePrint at

Marcus G. (2018) 'Deep Learning: A Critical Appraisal', arXiv, 2018, at

Markoff J. (2012) 'Scientists See Promise in Deep-Learning Programs' The New York Times, 23 November 2012, at

Mayer-Schonberger V. & Cukier K. (2013) 'Big Data: A Revolution That Will Transform How We Live, Work and Think' John Murray, 2013

Mueller H. & Freytag J.-C. (2003) 'Problems, Methods and Challenges in Comprehensive Data Cleansing' Technical Report HUB-IB-164, Humboldt-Universität zu Berlin, Institut fuer Informatik, 2003, at

Paden B., Cap M., Yong S.Z., Yershov D. & Frazzoli E. (2016) 'A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles' IEEE Transactions on Intelligent Vehicles 1, 1 (2016) at

Robertson L.J., Abbas R., Alici G., Munoz A. & Michael K. (2019) 'Engineering-Based Design Methodology for Embedding Ethics in Autonomous Robots' Proc. IEEE 107, 3 (March 2019) 582-599, at

Russell S.J. & Norvig P. (2009) 'Artificial Intelligence: A Modern Approach' Prentice Hall, 3rd edition, 2009

Scherer M.U. (2016) 'Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies' Harvard Journal of Law & Technology 29, 2 (Spring 2016) 353-400, at

Selbst A.D. & Powles J. (2017) 'Meaningful information and the right to explanation' International Data Privacy Law 7, 4 (November 2017) 233-242, at

Sheridan T.B. & Verplank W.L. (1978) 'Human and Computer Control for Undersea Teleoperators' MIT Press, 1978, at

Simonite T. (2018) 'A plea for AI that serves humanity instead of replacing it' Wired Magazine, 22 June 2018, at

Sulleyman A. (2017) 'Elon Musk: AI is a 'fundamental existential risk for human-civilisation' and creators must slow down' The Independent, 17 July 2017, at

Wachter S. & Mittelstadt B. (2019) 'A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI' Forthcoming, Colum. Bus. L. Rev. (2019), at

Wang R.Y. & Strong D.M. (1996) 'Beyond Accuracy: What Data Quality Means to Data Consumers' Journal of Management Information Systems 12, 4 (Spring, 1996) 5-33

Warwick K. (2014) 'The Cyborg Revolution' Nanoethics 8, 3 (Oct 2014) 263-273

Widom J. (1995) 'Research Problems in Data Warehousing' Proc. 4th Int'l Conf. on Infor. & Knowledge Management, November 1995, at

Wyndham J. (1932) 'The Lost Machine' (originally published in 1932), reprinted in A. Wells (Ed.) 'The Best of John Wyndham' Sphere Books, London, 1973, pp. 13- 36, and in Asimov I., Warrick P.S. & Greenberg M.H. (Eds.) 'Machines That Think' Holt, Rinehart, and Wilson, 1983, pp. 29-49

Zhaohui W. et al. (2016) 'Cyborg Intelligence: Recent Progress and Future Directions' IEEE Intelligent Systems 31, 6 (Nov-Dec 2016) 44-50


This paper has benefited from feedback from multiple colleagues, and particularly Peter Leonard of Data Synergies and Prof. Graham Greenleaf and Kayleen Manwaring of UNSW, and Prof Tom Gedeon of ANU. The comments of an anonymous referee were also helpful in ensuring clarification of key elements of the argument. I first applied the term 'intellectics' during a presentation to launch a Special Issue of the UNSW Law Journal in Sydney in November 2017.

Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University. He has also spent many years on the Board of the Australian Privacy Foundation, and is Company Secretary of the Internet Society of Australia.


Created: 11 July 2018 - Last Amended: 24 April 2019 by Roger Clarke