Roger Clarke's Web-Site
© Xamax Consultancy Pty Ltd, 1995-2018
Preliminary Draft of 9 September 2018
'AI in the Enterprise - Trust, Privacy, Ethics' Event
Auckland, 28 September 2018
Roger Clarke **
© Xamax Consultancy Pty Ltd, 2018
Available under an AEShareNet licence or a Creative Commons licence.
This document is at http://www.rogerclarke.com/EC/GAI.html
Business and government are looking to use artificial intelligence (AI) techniques not only to formulate recommendations, but also to make decisions and take action. Yet, with most AI, the underlying rationale is at least obscure, and it may not be available.
How can executives satisfy the Board that the business is being managed appropriately if the software provides no explanations? Beyond operational management, there are compliance risks to manage, and threats to important relationships with customers, staff, suppliers and the public.
Ill-advised uses of AI need to be identified in advance and nipped in the bud, to avoid harm to important corporate and social values. Organisations need to extract the achievable benefits from advanced technologies rather than dreaming dangerous dreams.
This presentation considers what shape guidance needs to take in relation to the deployment of AI, and how the Do's and Don'ts can be operationalised and embedded in corporate culture and business processes.
The term Artificial Intelligence (AI) was coined in 1955, in the proposal for the Dartmouth Summer Research Project on Artificial Intelligence (McCarthy et al. 1955). The proposal was based on "the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it".
The wild over-enthusiasm that characterises the promotion of AI has deep roots. Simon (1960) averred that "Within the very near future - much less than twenty-five years - we shall have the technical capability of substituting machines for any and all human functions in organisations. ... Duplicating the problem-solving and information-handling capabilities of the brain is not far off; it would be surprising if it were not accomplished within the next decade". Over 35 years later, with his predictions abundantly demonstrated as being fanciful, Simon nonetheless maintained his position, e.g. "the hypothesis is that a physical symbol system [of a particular kind] has the necessary and sufficient means for general intelligent action" (Simon 1996, p. 23 - but expressed in similar terms from the late 1950s, in 1969, and through the 1970s), and "Human beings, viewed as behaving systems, are quite simple" (p. 53). Simon acknowledged "the ambiguity and conflict of goals in societal planning" (p. 140), but his subsequent analysis of complexity (pp. 169-216) considered only a very limited sub-set of the relevant dimensions. Much the same assertions can be found in, for example, Kurzweil (2005): "by the end of the 2020s" computers will have "intelligence indistinguishable to biological humans" (p.25), and in shameless self-promotional documents of the current decade.
AI has offered a long litany of promises, many of them repeated multiple times on a cyclical basis. Each time, proponents have spoken and written excitedly about prospective technologies, using descriptions that not merely verged on the mystical, but often crossed into the realms of magic and alchemy. Given the habituated exaggeration that proponents indulge in, it is unsurprising that the field has exhibited cyclical 'boom and bust' patterns, with research funding being very easy or very difficult to obtain, depending on whether the hyperbole or the very low delivery-rate against promises was currently in focus.
One of AI's many problems has been that each of the actual successes deriving from what began as AI research has shed the name and become associated with some other term. For example, pattern recognition, in particular within two-dimensional imagery, has made a great deal of progress and achieved application in multiple fields, at an early stage with optical character recognition and later with more complex patterns. Expert systems approaches, particularly those based on rule-sets, have also achieved some degree of success. [To what extent did real-time trim and stability in air and water arise from control theory, and to what extent can it reasonably be identified with AI?]. Game-playing, particularly of chess and go, has provided entertainment value and spin-offs, but has not provided the breakthroughs towards posthumanism that its proponents appeared to be claiming for it.
The successes share some common features. They may be complex, but they are understandable by people with appropriate technical background, i.e. they are not magic, and applications of the technology are auditable. They have been able to be empirically tested in real-world contexts, but under sufficiently controlled conditions that the risks have been able to be managed.
The original conception of AI is not servicing humankind very well at all. If real progress is to be made, the longstanding calls for reconsideration of the idea need to be finally heeded. What could, and what should 'Artificial' 'Intelligence' (AI) mean? Might an alternative term better describe what humankind needs? How can the more promising paths be distinguished from the merely experimental and the outright dead-end ideas?
The sense in which the term 'intelligence' is used by the AI community is that an entity exhibits intelligence if it has perception and cognition of (relevant aspects of) its environment, has goals, and takes actions towards the achievement of those goals. Some AI proponents strive to replicate in artefacts the processes whereby human entities exhibit intelligence, whereas others are concerned with the artefact's performance rather than the means whereby the performance arises.
The term 'artificial' has always been problematic. To a considerable extent, it means 'synthetic', i.e. human-made, but equivalent to human. It is far from clear that there is a need for yet more human intelligence when there are already over 7 billion of us, many under-employed. To some extent, 'artificial' is used to mean both 'synthetic' and 'superior-to-human'. This raises the question of how superiority is to be measured (and even whether human intelligence - by definition inadequate - can define what superiority means).
An alternative approach appears to offer far greater prospects. The idea is traceable at least to Wyndham (1932): "Surely man and machine are natural complements: They assist one another". I argued in Clarke (1989) that there was a need to "deflect the focus ... toward the concepts of 'complementary intelligence' and 'silicon workmates' ... to complement human strengths and weaknesses, rather than to compete with them". Again, in Clarke (1993-94), reprised in Clarke (2014), I reasoned that: "Because robot and human capabilities differ, for the foreseeable future at least, each will have specific comparative advantages. Information technologists must delineate the relationship between robots and people by applying the concept of decision structuredness to blend computer-based and human elements advantageously".
To achieve Complementary Intelligence, we would design artefacts that:
Some examples of 'complementary intelligence' are:
This section identifies two broad areas in which AI activity has been intense in recent years. Considerable progress has been made in both, and in some of their sub-areas. On the other hand, the challenges are extraordinarily large, and progress in both areas is encountering serious difficulties, public scepticism and resistance. Enormous risks exist that a significant proportion of the benefits potentially extractable from these technologies may be lost because of over-claim, over-reach and collapse.
Robotics emerged as machines enhanced with computational capacity, and has enjoyed its major successes in the controlled environments of the factory floor and the warehouse.
real-time equilibration of the trim and stability of craft suspended on or in fluids, both water (e.g. floating oil-rigs and underwater vehicles) and air (e.g. manned aircraft and drones)
self-driving passenger vehicles, variously on rails, in controlled environments such as mines, quarries and dedicated bus routes, and in open environments
This brings with it the characteristic of autonomy.
The term cyborgisation refers to the process of enhancing individual humans by technological means (Clarke 2005), such that a cyborg is a hybrid of a human and one or more artefacts. To qualify as AI, the enhancement needs to involve some computational or software-based component, quite possibly together with one or more actuators. For example, a walking-stick or an inert hip-replacement does not involve AI; whereas AI is involved in some replacement legs for above-knee amputees, in the form of an artificial knee that contains software to sustain balance within the joint.
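The kind of software embedded in such a joint can be illustrated with a deliberately minimal closed-loop control sketch. All of the parameters below (set-point, gain, step count) are invented for the illustration; a real prosthetic controller is far more elaborate, but the sense-error-correct cycle is the essential idea.

```python
# Minimal proportional-control sketch (invented parameters): the
# software repeatedly senses the joint's deviation from a desired
# angle and drives the actuator to reduce that deviation.
desired_angle = 170.0   # degrees; invented set-point for stance phase
angle = 140.0           # invented current joint angle
gain = 0.5              # proportional gain (invented)

history = []
for step in range(10):
    error = desired_angle - angle   # how far from the set-point?
    angle += gain * error           # actuator nudges the joint
    history.append(round(angle, 2))

# each iteration halves the remaining error, so the joint settles
# towards the set-point
print(history)
```

The point of the sketch is that such control software is understandable and auditable: every correction is traceable to a measured error and a fixed rule, in contrast to the empirical techniques discussed later.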
It is useful to distinguish between prosthetics, which replace lost functionality, and orthotics, which provide augmented or additional functionality (Clarke 2011). It was argued in Clarke (2014) that drone pilots' use of instrument-based remote control, and particularly of first-person-view (FPV) headsets, represents a form of orthotic cyborgisation.
The emphasis has been shifting in recent years. The conception is now usefully inverted, and the field regarded as computing capabilities enhanced with actuators, enabling computational processes to act directly on the world. The term 'intellectics' is a useful means of encapsulating that switch in emphasis. (The term has been previously used by Wolfgang Bibel, in a similar manner, originally in German c. 1992).
people-scoring, most prominently in financial credit, in social welfare, and in 'social credit' - although to date only in the PRC (Chen & Cheung 2017).
automation of administrative decisions and actions
'post-algorithmic' / 'empirical'
The major area of AI in which developments have been occurring in recent years is machine learning (AI/ML), whose dominant form is the neural-network approach. This adopts a fundamentally different approach to the development of computing functionality from that of previous generations of software development technologies. Pilots, and in some cases live applications, apply neural networks to inferencing, to decision-making, and to the performance of actions in and on the real world.
The technologies have also been applied to prediction, with often outlandish assumptions made about the technique's effectiveness, resulting in something resembling pre-destination, i.e. allocation of behaviour and even of individual people to categories, frequently of a criminal nature.
A critical feature of AI/ML gives rise to challenges that have seldom previously been confronted by humankind.
training-set ==>> weightings
empirical, no explanations
This brings with it the removal of transparency.
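The 'training-set ==>> weightings' characteristic can be shown with a deliberately minimal sketch. The training-set below is invented, and a single artificial neuron stands in for a full network, but the essential point survives the simplification: the only artefact that training produces is a handful of numeric weightings, and no humanly-readable rationale exists anywhere within it.

```python
import random

random.seed(0)

# Invented training-set: (inputs, desired output). The labels mimic a
# decision that is positive only when both indicators are positive.
training_set = [
    ((0.0, 0.0), 0.0),
    ((0.0, 1.0), 0.0),
    ((1.0, 0.0), 0.0),
    ((1.0, 1.0), 1.0),
]

# two input weightings plus a bias, initialised at random
weights = [random.uniform(-1, 1) for _ in range(3)]

def predict(w, x):
    s = w[0] * x[0] + w[1] * x[1] + w[2]   # weighted sum plus bias
    return 1.0 if s > 0 else 0.0           # step activation

# perceptron learning rule: nudge the weightings towards fewer errors
for _ in range(100):
    for x, target in training_set:
        error = target - predict(weights, x)
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        weights[2] += 0.1 * error

# 'training-set ==>> weightings': the outcome is opaque numbers
print(weights)
print([predict(weights, x) for x, _ in training_set])
```

The trained artefact reproduces the training-set's behaviour, but inspecting the three numbers in `weights` tells an affected person nothing about *why* any particular inference was drawn.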
Transparency is in any case much more challenging in the contemporary context than it was in the past. During the early decades of software development, until c.1990, the rationale underlying any particular inference was apparent from the independently-specified algorithm or procedure implemented in the software. Subsequently, so-called expert systems adopted an approach whereby the problem-domain is described, but the problem and solution, and hence the rationale for an inference, are much more difficult to access. Recently, purely empirical techniques such as neural nets and the various approaches to machine learning have attracted a lot of attention. These do not even embody a description of a problem domain. They merely comprise a quantitative summary of some set of instances (Clarke 1991). In such circumstances, no humanly-understandable rationale for an inference exists, transparency is non-existent, and accountability is impossible (Burrell 2016, Knight 2017). To cater for such problems, Broeders et al. (2017), writing in the context of national security applications, called for the imposition of a legal duty of care and requirements for external reviews, and the banning of automated decision-making.
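The contrast with the earlier, rule-based generations can be sketched as follows. The rules, names and thresholds below are invented, but they illustrate why such systems remained auditable: the inference can carry its own rationale, because the system can report exactly which rules fired.

```python
# A deliberately tiny rule-base (rule names and thresholds invented)
rules = [
    ("R1", "income below threshold", lambda f: f["income"] < 30000),
    ("R2", "prior default on record", lambda f: f["prior_default"]),
]

def assess(facts):
    """Return an inference together with the rules that produced it."""
    fired = [(rule_id, reason) for rule_id, reason, condition in rules
             if condition(facts)]
    conclusion = "decline" if fired else "refer to officer"
    return conclusion, fired   # the rationale travels with the inference

conclusion, rationale = assess({"income": 25000, "prior_default": False})
print(conclusion)
for rule_id, reason in rationale:
    print("because", rule_id + ":", reason)
```

A person affected by the decision, an auditor, or a court can be told "declined because R1: income below threshold". No equivalent account can be extracted from a set of trained weightings.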
Neural networks, for example, depend firstly on some, often simplistic, modelling, and secondly on a heap of empirical data being shovelled into the inevitably inadequate model. The heap may be conveniently available or carefully selected, and it may be small or large, and it may be representative of some population or otherwise. There is no sense of a rationale for the inferences that are drawn from software developed in such ways. The various other forms of technique that have emerged from the artificial intelligence (AI), machine learning (ML) and data analytics / data science communities in many cases also lack any meaningful form of transparency of inferencing (Knight 2017, Clarke 2018a).
These movements embody the abandonment of systemic reasoning, and the championing of empiricism over theory. Empirical techniques are being touted as the new, efficient way to apply computing to the identification of hitherto-missed opportunities. Moreover, the techniques are escaping from research laboratories and being applied within operational systems. This results in a-rational decision-making, i.e. actions by organisations that have escaped the constraint of having to be logically justified. The loss of the decision transparency that existed when applications were developed using earlier generations of software undermines organisations' accountability for their decisions and actions. In the absence of transparency, serious doubts arise about the survival of principles such as evaluation, fairness, proportionality, evidence-based decision-making, and the right to challenge decisions (APF 2013).
Warnings about the risks arising from such naive applications of computing power have been sounded many times (Weizenbaum 1976, Dreyfus 1992, Aggarwal 2018). It is vital that personal ethical responsibility be taken for harm arising from negligently-delegated decisions (Wigan 2015), and that legal obligations and sanctions be imposed on organisations and individuals.
The notion of accountability has long been central to the operation of justice. An entity takes both moral and legal responsibility for impacts and implications arising from its actions and its inaction. It incurs liabilities, and bears risks, and is answerable to the courts for them. It offends the principle of accountability when responsibility for harm cannot be sheeted home to any entity, as occurs when, for example, the perpetrator of a crime cannot be identified, or the party with the liability is bankrupt or insolvent.
If artefacts have a degree of autonomy, accountability can only be sustained if one or more entities incur criminal and civil liabilities for harm arising from the autonomous behaviour. The absence of transparency also destroys accountability, because the lack of clarity about the basis on which decisions were made and actions taken supports plausible deniability, and hence enables parties to contest liability.
To the extent that robotic autonomy or intellectic inexplicability becomes accepted, humanness is undermined. The much more likely scenario is that society will be revolted by the idea, and abandon it.
Because of AI's potential for harm, it is important for organisations to identify the restraints on their freedom of choice and action. The next section examines the regulatory landscape as it applies to AI. This section first considers the extent to which ethics affects organisational applications of technology.
Ethics is a branch of philosophy concerned with concepts of right and wrong conduct. Fieser (1995) and Pagallo (2016) distinguish 'meta-ethics', which is concerned with the language, origins, justifications and sources of ethics, from 'normative ethics', which formulates generic norms or standards, and 'applied ethics', which endeavours to operationalise norms in particular contexts. In a recent paper, Floridi (2018) has referred to 'hard ethics' - that which "may contribute to making or shaping the law" - and 'soft ethics' - which are discussed after the fact.
From the viewpoint of instrumentalists in business and government, the field of ethics suffers two enormous drawbacks. The first is that, as a form of philosophical endeavour, it embodies every complexity and contradiction that smart people can dream up, there is no authority for any particular formulation of norms, and hence every proposition is subject to debate. The second is that few formulations of philosophers ever reach even close to operational guidance, and hence the sources enable prevarication and provide endless excuses for inaction. The inevitable result is that ethical discussions seldom have much real-world influence on real-world behaviour. Ethics is an intellectually stimulating topic for the dinner-table, and graces ex post facto reviews of disasters. To an instrumentalist - who wants to get things done - ethics diversions are worse than a time-waster; they're a barrier to progress.
The occasional fashion of 'business ethics' naturally inherits the vagueness of ethics generally, and provides little or no concrete guidance to organisations in any of the many areas in which ethical issues are thought to arise. Far less does 'business ethics' assist in relation to complex and opaque digital technologies. However, Clarke (2018a) offers a small collection of attempts to consolidate general ethical principles that may have applicability in technology-rich contexts - including bio-medicine, surveillance and information technology. Frequently-encountered norms include benefit, justification of disbenefit, mitigation of disbenefit, proportionality and recourse.
The related notion of Corporate Social Responsibility (CSR), sometimes extended to include an Environmental aspect, can be argued to have an ethical base, but in practice has as its primary focus the extraction of public relations gains from organisations' required investments in regulatory compliance. CSR can, however, extend beyond the direct interests of the organisation to include philanthropic contributions to individuals, community, society or the environment.
However, it is important to appreciate the constraints on company directors, who are required by law to act in the best interests of the company. Attention to broad ethical questions is generally extraneous to, and even in conflict with, that requirement, except where a business case indicates sufficient benefits to the organisation from taking a socially or environmentally responsible approach. The primary ways in which benefits can accrue are through compliance with regulatory requirements, and enhanced relationships with important stakeholders. Most commonly, these will be customers, suppliers and employees, but the scope might extend to communities and economies on which the company has a degree of dependence.
Given the limited framework provided by ethics, the question arises as to the extent to which organisations that research, create and apply AI are subject to legal constraints.
AI has been argued by its proponents to be arriving imminently, roughly every 5 years since 1956. Despite that, regulatory requirements that have been designed or modified specifically with AI in mind, are difficult to find. One reason for this is that parliaments seldom act in advance of new technologies being deployed. A 'precautionary principle' has been enunciated, whose strong form exists in some jurisdictions' environmental laws, along the lines of 'When human activities may lead to morally unacceptable harm that is scientifically plausible but uncertain, actions shall be taken to avoid or diminish that potential harm' (TvH 2006). More generally, however, the 'principle' is merely an ethical norm to the effect that 'If an action or policy is suspected of causing harm, and scientific consensus that it is not harmful is lacking, then the burden of proof arguably falls on those taking the action'. If AI is, or shortly will be, as impactful as its proponents argue, then there's a very strong argument that the precautionary principle must be applied, and as law, not as a mere 'norm'.
This section identifies a range of sources that may offer organisations, if not actual guidance, at least some insights into what society might come to expect, and what obligations organisations might need to recognise. The following categories of obligations may be relevant (Clarke & Greenleaf 2018):
In-place industrial robotics, in production-lines and warehouses, is well-established, and some degree of regulation exists, at least in relation to worker safety and employer liability.
In relation to public-space robotics, driverless vehicles, drones and/or AI, on the other hand, it does not appear that any country has a legal framework in place. Multiple countries and sub-national jurisdictions have declared policies. Common features of them have been a focus on economic motivations, stimulation and facilitation of innovation, and an absence of any form even of guidance, let alone of regulation.
In HTR (2017), South Korea is identified as having enacted the first national law relating specifically to robotics: the Intelligent Robots Development Distribution Promotion Act of 2008. It is almost entirely facilitative and stimulative, however, and barely even aspirational in relation to regulation of robotics. There is mention of a 'Charter', "including the provisions prescribed by Presidential Decrees, such as ethics by which the developers, manufacturers, and users of intelligent robots shall abide" - but no such Charter appears to exist. A mock-up is at Akiko (2012). HTR (2018c) offers a generic regulatory specification in relation to research and technology generally, including robotics and AI.
Automated decision-making about people has long been regulated by French data protection law, and since 2018 by European law generally, through the General Data Protection Regulation (GDPR) Art. 22. See, however, Wachter et al. (2017), which doubts whether the provision is effective, particularly in view of the absence of a requirement that humanly-understandable explanations of decisions be provided; transparency hence remains a desirable norm rather than a legal obligation.
For discussion of AI-specific laws, see Palmerini et al. (2014).
See HTR (2018a, 2018b).
Applications of new technologies are generally subject to existing laws. Particularly with 'breakthrough', revolutionary and disruptive technologies, existing laws are likely to be ill-fitted to the new context, because they were conceived, articulated and applied without knowledge of the new form. In some cases, existing law may hinder new technologies in ways that are unhelpful not only to innovators but even to those affected by them. In other cases, existing law may have been framed in such a manner that it does not encompass the new form, even though there would have been benefits if it had done so.
Applications of AI will generally be subject to the various forms of commercial law, including contractual obligations (both express conditions and imputed terms), consumer rights laws where these exist, and copyright and patent laws. In some contexts (such as AI software embedded in a device), product liability laws may apply. Laws that assign risk to innovators, such as the tort of negligence, may also apply. The obligations that corporations law assigns to company directors are also relevant. Further sources of regulatory impact are likely to be the laws relating to the various industry sectors within which AI is applied, such as road transport law, workplace and employment law, health law, and data protection law.
Particularly in common law jurisdictions, there is likely to be a great deal of uncertainty about the way in which laws will be applied by the courts in the event that a dispute reaches them. This acts to some extent as a deterrent against innovation, and can considerably increase the costs incurred by proponents, and delay deployment. From the viewpoint of opponents, on the other hand, addressing the real and perceived threats embodied within the new technology may appear challenging, expensive and slow.
Parliaments struggle to understand and cope with new technologies. An approach to regulation that offers promise is co-regulation. Under this arrangement, the parliament establishes a legal framework, including authority, obligations, sanctions and enforcement mechanisms, but without expressing the obligations at a detailed level. The detail is instead developed through consultative processes among advocates for the various stakeholders. The result is an enforceable Code that articulates the general principles expressed in the relevant legislation.
Unfortunately, few instances of effective co-regulation exist, and there are few signs of parliaments being aware of the opportunity and its applicability to robotics and AI.
It is common for parliaments to designate a specialist government agency or parliamentary appointee to exercise loose oversight over a contested set of activities, or to enforce laws and Codes that regulate them. Not infrequently, enabling legislation in relation to new technologies creates a new agency or appointee rather than expanding the scope of an existing one. In very few instances does AI lie within the scope of an existing agency or appointee. Some exceptions may exist, for example in relation to the public safety aspects of drones and self-driving motor vehicles.
Six decades after the AI era was launched, the EU has gone no further than a preliminary statement (EC 2018) and a discussion document of the Data Protection Supervisor (EDPS 2016). The UK Data Protection Commissioner likewise has only reached the discussion-paper stage (ICO 2017).
Corporations club together for various reasons, some of which can be to the detriment of other parties, such as collusion on bidding and pricing. Other motivations, however, can offer potential benefits for others, as well as themselves. Collaborative approaches to infrastructure can improve services and reduce costs for the sector's customers. Misbehaviour by an industry's 'cowboys' can be highlighted by being clearly inconsistent with norms promulgated by the more responsible corporations in the sector.
In practice, however, the effect is seldom all that great. Few Industry Codes are sufficiently stringent to protect the interests of other parties, and the absence of enforcement undermines the endeavour. As a result, the primary role of such Codes is as camouflage, creating a mirage of safeguards and hence holding off actual regulatory measures.
In the AI field, examples of industry coalitions eagerly pre-countering the threat of regulation include FLI (2017), ITIC (2017), and PoAI (2018).
A more valuable role is played by industry standards. HTR (2017) catalogues industry standards issued by ISO in the AI arena. However, industry standards most commonly focus on inter-operability and physical safety, and some on business processes to achieve quality products. Hence only a small proportion of the threats embodied in AI can be avoided, mitigated or managed through industry standards.
A role can also be played by professional associations, because these generally balance public needs against self-interest somewhat better than industry associations do. Their impact is, however, far less pronounced than that of industry associations. Moreover, the initiatives to date of the two largest bodies are underwhelming, with ACM (2017) using weak forms such as "should" and "are encouraged to", and IEEE (2017) offering lengthy prose but vague and highly-qualified 'principles'.
It was noted above that Directors of corporations are required by law to pursue the interests of the corporation ahead of all others. It is therefore unsurprising, and even to be expected, that organisational self-regulation is almost always ineffectual from the viewpoint of the supposed beneficiaries, and often not even effective at protecting the organisation itself from bad publicity. Recent offerings by major corporations include IBM (Rayome 2017), Google (Pichai 2018) and MS (2018). For an indication of the scepticism with which such documents are met, see Newcomer (2018).
Media articles have reported on a wide range of, in most cases, fairly vague 'principles', proposed by a diverse array of organisations, including:
Although there are commonalities among these formulations, there is also a great deal of diversity, and few of them offer usable advice on how to ensure that applications of AI are undertaken in a responsible manner. The following section draws on the sources identified above in order to offer practical advice. It places the ideas within a conventional framework, but extends that framework in order to address the needs of all stakeholders rather than just the corporation itself.
Ethical analyses offer little assistance, and a regulatory framework is lacking. It might seem attractive to business enterprises to face few legal obligations and limited compliance-risk exposure. On the other hand, many other risks are heightened, because 'cowboy' behaviour is inevitable, at least by some competitors, but also by individuals and groups within each organisation who can be tempted by the promise that AI is supposed to offer. The reputational risk faced by organisations is high. As a result, it is in organisations' own self-interest for a modicum of regulation to exist, in order to provide a protective shield against media exposés and public backlash.
This section offers guidance to organisations. It does this by adopting a familiar and practical approach to assessing and managing risks. However, it extends the conventional framework by applying an important message that can be found in ethical analysis, and that is commonly lacking in business approaches to risk. That missing ingredient is stakeholder analysis. Risk assessment and management needs to be performed not only from the business perspective, but also from the perspectives of other stakeholders.
There are many sources of guidance in relation to risk assessment and management. This is most familiar in the context of security of IT assets and digital data. The language and the approaches vary among the many sources (most usefully: Firesmith 2004, ISO 2005, ISO 2008, NIST 2012, ENISA 2016, ISM 2017).
For the present purpose, a model is adopted that is summarised in Appendix 1 of Clarke (2015). See Figure 1.
Existing corporate practice approaches this model from the perspective of the organisation itself. Relevant assets are identified, and an analysis undertaken of the various forms of harm that could arise to those assets as a result of threats impinging on vulnerabilities and giving rise to incidents. Existing safeguards are taken into account, in order to guide the development of a strategy and plan to refine and extend the safeguards in order to provide a degree of protection that is reasonable in the circumstances.
The notion of stakeholders was introduced in Freeman & Reed (1983) as a means of juxtaposing the interests of other parties against those of the corporation's shareholders. Many stakeholders are participants in the process or intervention, in such roles as employees, customers and suppliers. Where the organisation's computing services extend beyond its boundaries, all of those primary categories of stakeholder may be users of the organisation's information systems.
However, the categories of stakeholders are broader than this (Pouloudi & Whitley 1997, p.3), comprising not only "participants in the information systems development process" but also "any other individuals, groups or organizations whose actions can influence or be influenced by the development and use of the system whether directly or indirectly". The term 'usees' is a usefully descriptive term for these once-removed stakeholders (Clarke 1992, Fischer-Hübner & Lindskog 2001, Baumer 2015).
My first proposition for extension beyond conventional corporate risk assessment is that the responsible application of AI is only possible if stakeholder analysis is undertaken in order to identify the categories of entities that are or may be affected by the project. There is a natural tendency to focus only on those entities that have sufficient market or institutional power to significantly affect the success of the project. In a world of social media and rapid and deep mood-swings, it is advisable not to overlook the nominally less powerful stakeholders. Where large numbers of individuals are involved (typically, employees, consumers and the general public), it will generally be practical to use representative and advocacy organisations as intermediaries, to speak on behalf of the categories or segments of individuals.
My second proposition then is that the responsible application of AI depends on risk assessment processes being conducted from the perspectives of stakeholders, to complement that undertaken from the perspective of the corporation.
The results of the two or more risk assessment processes outlined above deliver the information needed to develop a strategy and plan whereby existing safeguards can be adapted or replaced, and new safeguards conceived and implemented. ISO standard 27005 (2008, pp.20-24) discusses four options for what it refers to as 'risk treatment': risk modification, risk retention, risk avoidance and risk sharing. A more detailed pragmatic list is offered in Table 1.
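The four ISO 27005 options can be sketched in code, applied per (stakeholder, risk) pair so that the corporation's perspective and the stakeholders' perspectives sit side by side. The decision rule below is a deliberately naive illustration of my own devising, not a procedure from the standard; real treatment choices weigh cost, feasibility and stakeholder impact, not just a score.

```python
from enum import Enum

class Treatment(Enum):
    """The four 'risk treatment' options of ISO/IEC 27005 (2008, pp. 20-24)."""
    MODIFICATION = "reduce likelihood or impact via new or changed safeguards"
    RETENTION = "accept the residual risk, with eyes open"
    AVOIDANCE = "withdraw from the activity that gives rise to the risk"
    SHARING = "shift part of the risk, e.g. insurance or contract terms"

def plan_treatments(assessments: dict[tuple[str, str], int],
                    tolerance: int = 6) -> dict[tuple[str, str], Treatment]:
    """Assign a treatment to each (stakeholder, risk) pair.

    Naive rule for illustration only: retain risks within tolerance,
    share mid-range risks, modify the rest.
    """
    plan = {}
    for key, rating in assessments.items():
        if rating <= tolerance:
            plan[key] = Treatment.RETENTION
        elif rating <= 2 * tolerance:
            plan[key] = Treatment.SHARING
        else:
            plan[key] = Treatment.MODIFICATION
    return plan

# Hypothetical ratings from two perspectives: the corporation and a usee category
assessments = {
    ("corporation", "regulatory penalty"): 12,
    ("consumers", "wrongful automated refusal"): 16,
    ("consumers", "minor service delay"): 4,
}
plan = plan_treatments(assessments)
```

Keying the plan by stakeholder as well as by risk makes the multi-perspective requirement concrete: the same treatment machinery runs unchanged over the corporation's register and over each stakeholder's.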
Existing documents and techniques are strongly oriented towards protection against risks as perceived by the organisation. Risks to other stakeholders are commonly treated as, at best, a second-order consideration, and at worst as if they were out-of-scope. All risk management work involves the exercise of a considerable amount of imagination. That characteristic needs to be underlined even more strongly in the case of the comprehensive, multi-stakeholder approach that I am proposing here.
This section suggests some guidance for organisations seeking a way to investigate and implement AI in a responsible manner. It draws on available materials, cited in this article, and extracted into resource-pages on 'ethical principles and IT', at Clarke (2018a) and on 'principles for AI', at Clarke (2018b).
MUST TIE TOGETHER THE THREADS, incl. 'complementary intelligence'; AI as robotics, cyborgisation and intellectics; autonomy; automated decision-making; transparency and accountability; ethics; risk assessment; risk management.
ACM (2017) 'Statement on Algorithmic Transparency and Accountability' Association for Computing Machinery, January 2017, at https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf
Akiko (2012) 'South Korean Robot Ethics Charter 2012' Akiko's Blog, 2012, at https://akikok012um1.wordpress.com/south-korean-robot-ethics-charter-2012/
Baumer E.P.S. (2015) 'Usees' Proc. 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI'15), April 2015
Calo R. (2017) 'Artificial Intelligence Policy: A Primer and Roadmap' UC Davis L. Rev. 51 (2017) 399-404
Chen Y. & Cheung A.S.Y. (2017) 'The Transparent Self Under Big Data Profiling: Privacy and Chinese Legislation on the Social Credit System' The Journal of Comparative Law 12, 2 (June 2017) 356-378, at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2992537
Clarke R. (1989) 'Knowledge-Based Expert Systems: Risk Factors and Potentially Profitable Application Area', Xamax Consultancy Pty Ltd, January 1989, at http://www.rogerclarke.com/SOS/KBTE.html
Clarke R. (1992) 'Extra-Organisational Systems: A Challenge to the Software Engineering Paradigm' Proc. IFIP World Congress, Madrid, September 1992, at http://www.rogerclarke.com/SOS/PaperExtraOrgSys.html
Clarke R. (1993) 'Asimov's Laws of Robotics: Implications for Information Technology' in two parts, in IEEE Computer 26,12 (December 1993) 53-61, and 27,1 (January 1994) 57-66, at http://www.rogerclarke.com/SOS/Asimov.html
Clarke R. (2005) 'Human-Artefact Hybridisation: Forms and Consequences' Proc. Ars Electronica 2005 Symposium on Hybrid - Living in Paradox, Linz, Austria, 2-3 September 2005, PrePrint at http://www.rogerclarke.com/SOS/HAH0505.html
Clarke R. (2011) 'Cyborg Rights' IEEE Technology and Society 30, 3 (Fall 2011) 49-57, at http://www.rogerclarke.com/SOS/CyRts-1102.html
Clarke R. (2014) 'What Drones Inherit from Their Ancestors' Computer Law & Security Review 30, 3 (June 2014) 247-262, PrePrint at http://www.rogerclarke.com/SOS/Drones-I.html
Clarke R. (2015) 'The Prospects of Easier Security for SMEs and Consumers' Computer Law & Security Review 31, 4 (August 2015) 538-552, PrePrint at http://www.rogerclarke.com/EC/SSACS.html
Clarke R. (2018a) 'Ethical Principles and Information Technology' Xamax Consultancy Pty Ltd, rev. September 2018, at http://www.rogerclarke.com/EC/GAIE.html
Clarke R. (2018b) 'Principles for AI: A 2017-18 SourceBook' Xamax Consultancy Pty Ltd, rev. September 2018, at http://www.rogerclarke.com/EC/GAI.html
Clarke R. & Greenleaf G.W. (2018) 'Dataveillance Regulation: A Research Framework' Journal of Law and Information Science 25, 1 (2018), at http://www.rogerclarke.com/DV/DVR.html
EC (2018) 'Statement on Artificial Intelligence, Robotics and 'Autonomous' Systems' European Group on Ethics in Science and New Technologies, European Commission, March 2018, at http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf
EDPS (2016) 'Artificial Intelligence, Robotics, Privacy and Data Protection' European Data Protection Supervisor, October 2016, at https://edps.europa.eu/sites/edp/files/publication/16-10-19_marrakesh_ai_paper_en.pdf
ENISA (2016) 'Risk Management: Implementation principles and Inventories for Risk Management/Risk Assessment methods and tools' European Union Agency for Network and Information Security, June 2016, at https://www.enisa.europa.eu/publications/risk-management-principles-and-inventories-for-risk-management-risk-assessment-methods-and-tools
Fieser J. (1995) 'Ethics' Internet Encyclopaedia of Philosophy, 1995, at https://www.iep.utm.edu/ethics/
Firesmith D. (2004) 'Specifying Reusable Security Requirements' Journal of Object Technology 3, 1 (Jan-Feb 2004) 61-75, at http://www.jot.fm/issues/issue_2004_01/column6
Fischer-Hübner S. & Lindskog H. (2001) 'Teaching Privacy-Enhancing Technologies' Proc. IFIP WG 11.8 2nd World Conference on Information Security Education, Perth, 2001, at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.24.3950&rep=rep1&type=pdf
FLI (2017) 'Asilomar AI Principles' Future of Life Institute, January 2017, at https://futureoflife.org/ai-principles/?cn-reloaded=1
Floridi L. (2018) 'Soft Ethics: Its Application to the General Data Protection Regulation and Its Dual Advantage' Philosophy & Technology 31, 2 (June 2018) 163-167, at https://link.springer.com/article/10.1007/s13347-018-0315-5
Freeman R.E. & Reed D.L. (1983) 'Stockholders and Stakeholders: A New Perspective on Corporate Governance' California Management Review 25, 3 (1983) 88-106, at https://www.researchgate.net/profile/R_Freeman/publication/238325277_Stockholders_and_Stakeholders_A_New_Perspective_on_Corporate_Governance/links/5893a4b2a6fdcc45530c2ee7/Stockholders-and-Stakeholders-A-New-Perspective-on-Corporate-Governance.pdf
GEFA (2016) 'Position on Robotics and AI' The Greens / European Free Alliance Digital Working Group, November 2016, at https://juliareda.eu/wp-content/uploads/2017/02/Green-Digital-Working-Group-Position-on-Robotics-and-Artificial-Intelligence-2016-11-22.pdf
HOL (2018) 'AI in the UK: ready, willing and able?' Select Committee on Artificial Intelligence, House of Lords, April 2018, at https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf
HTR (2017) 'Robots: no regulatory race against the machine yet' The Regulatory Institute, April 2017, at http://www.howtoregulate.org/robots-regulators-active/#more-230
HTR (2018a) 'Report on Artificial Intelligence: Part I - the existing regulatory landscape' The Regulatory Institute, May 2018, at http://www.howtoregulate.org/artificial_intelligence/
HTR (2018b) 'Report on Artificial Intelligence: Part II - outline of future regulation of AI' The Regulatory Institute, June 2018, at http://www.howtoregulate.org/aipart2/#more-327
HTR (2018c) 'Research and Technology Risks: Part IV - A Prototype Regulation' The Regulatory Institute, March 2018, at http://www.howtoregulate.org/prototype-regulation-research-technology/#more-298
ICO (2017) 'Big data, artificial intelligence, machine learning and data protection' UK Information Commissioner's Office, Discussion Paper v.2.2, September 2017, at https://ico.org.uk/for-organisations/guide-to-data-protection/big-data/
IEEE (2017) 'Ethically Aligned Design' Version 2, IEEE, December 2017, at http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html
ISM (2017) 'Information Security Manual' Australian Signals Directorate, November 2017, at https://acsc.gov.au/infosec/ism/index.htm
ISO (2005) 'Information Technology - Code of practice for information security management', International Standards Organisation, ISO/IEC 27002:2005
ISO (2008) 'Information Technology - Security Techniques - Information Security Risk Management' ISO/IEC 27005:2008
ITIC (2017) 'AI Policy Principles' Information Technology Industry Council, undated but apparently of October 2017, at https://www.itic.org/resources/AI-Policy-Principles-FullReport2.pdf
McCarthy J., Minsky M.L., Rochester N. & Shannon C.E. (1955) 'A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence' Reprinted in AI Magazine 27, 4 (2006), at https://www.aaai.org/ojs/index.php/aimagazine/article/viewFile/1904/1802
MS (2018) 'Microsoft AI principles' Microsoft, August 2018, at https://www.microsoft.com/en-us/ai/our-approach-to-ai
Newcomer E. (2018) 'What Google's AI Principles Left Out: We're in a golden age for hollow corporate statements sold as high-minded ethical treatises' Bloomberg, 8 June 2018, at https://www.bloomberg.com/news/articles/2018-06-08/what-google-s-ai-principles-left-out
NIST (2012) 'Guide for Conducting Risk Assessments' National Institute of Standards and Technology, Special Publication SP 800-30 Rev. 1, September 2012, at http://csrc.nist.gov/publications/nistpubs/800-30-rev1/sp800_30_r1.pdf
Pagallo U. (2016) 'Even Angels Need the Rules: AI, Roboethics, and the Law' Proc. ECAI 2016
Palmerini E. et al. (2014) 'Guidelines on Regulating Robotics Delivery' EU Robolaw Project, September 2014, at http://www.robolaw.eu/RoboLaw_files/documents/robolaw_d6.2_guidelinesregulatingrobotics_20140922.pdf
Pichai S. (2018) 'AI at Google: our principles' Google Blog, 7 Jun 2018, at https://www.blog.google/technology/ai/ai-principles/
PoAI (2018) 'Our Work (Thematic Pillars)' Partnership on AI, April 2018, at https://www.partnershiponai.org/about/#pillar-1
Pouloudi A. & Whitley E.A. (1997) 'Stakeholder Identification in Inter-Organizational Systems: Gaining Insights for Drug Use Management Systems' European Journal of Information Systems 6, 1 (1997) 1-14, at http://eprints.lse.ac.uk/27187/1/__lse.ac.uk_storage_LIBRARY_Secondary_libfile_shared_repository_Content_Whitley_Stakeholder%20identification_Whitley_Stakeholder%20identification_2015.pdf
Rayome A.D. (2017) 'Guiding principles for ethical AI, from IBM CEO Ginni Rometty' TechRepublic, 17 January 2017, at https://www.techrepublic.com/article/3-guiding-principles-for-ethical-ai-from-ibm-ceo-ginni-rometty/
Smith R. (2018) '5 core principles to keep AI ethical' World Economic Forum, 19 Apr 2018, at https://www.weforum.org/agenda/2018/04/keep-calm-and-make-ai-ethical/
TvH (2006) 'Telstra Corporation Limited v Hornsby Shire Council' NSWLEC 133 (24 March 2006), esp. paras. 113-183, at http://www.austlii.edu.au/au/cases/nsw/NSWLEC/2006/133.htm
UGU (2017) 'Top 10 Principles for Ethical AI' UNI Global Union, December 2017, at http://www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf
Villani C. (2017) 'For a Meaningful Artificial Intelligence: Towards a French and European Strategy' Part 5 - What are the Ethics of AI?, Mission for the French Prime Minister, March 2018, pp.113-130, at https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf
Wachter S., Mittelstadt B. & Floridi L. (2017) 'Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation' International Data Privacy Law 7, 2 (May 2017) 76-99, at https://academic.oup.com/idpl/article/7/2/76/3860948
Wyndham J. (1932) 'The Lost Machine' (originally published in 1932), reprinted in A. Wells (Ed.) 'The Best of John Wyndham' Sphere Books, London, 1973, pp. 13-36, and in Asimov I., Warrick P.S. & Greenberg M.H. (Eds.) 'Machines That Think' Holt, Rinehart and Winston, 1983, pp. 29-49
Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University. He has also spent many years on the Board of the Australian Privacy Foundation, and is Company Secretary of the Internet Society of Australia.