Version of 15 January 2020
Roger Clarke
© Xamax Consultancy Pty Ltd, 2019-20
Available under an AEShareNet licence or a Creative Commons licence.
This is a short piece intended for professionals, and supersedes versions of 18 August 2019, 20 February 2019 and 14 October 2018
This document is at http://www.rogerclarke.com/EC/RAIC.html
Considerable public disquiet exists about Artificial Intelligence. Some concerns are ill-informed, some speculative, and some well-grounded. Public opposition could result not only in justified constraints on AI but also in knee-jerk regulatory measures that create unnecessary obstacles to desirable forms and applications of AI. A set of 50 Principles for Responsible AI has been consolidated from a suite of proposals published by highly diverse organisations. The 50 Principles offer guidance for professionals in all phases of the AI supply chain.
Artificial Intelligence (AI) excites a lot of people. And hyperbole about AI upsets a lot of other people. Engineers and computer scientists are busy sifting through the chaff to find the wheat. In doing so, they have a trio of factors in mind: the benefits to be gained, the harm to be avoided, and the control to be exercised. However, with any powerful and complex technology, there are many subtleties and conflicts to contend with.
Benefits have to be discounted by investment costs and ongoing operational costs. Further, a meaningful cost/benefit analysis has to go beyond the financial factors to encompass impacts on corporate strategy and reputation. This means that consideration has to be given to possible negative reactions by business partners: a win for the organisation that sponsors a technology application may not represent a win-win for all parties.
When assessing whether an application of AI might cause harm, the scope needs to be expanded beyond business partners to all other stakeholders as well. The evaluation also needs to encompass both the inevitable and predictable negative impacts on stakeholders' interests and contingent harms that may or may not arise. Two rules of thumb are often invoked: 'first, do no harm', and 'Pareto optimality', which a design achieves if no adjustment to it can make any player better off without making another player worse off. Such heuristics seem more like wishful thinking than useful guidance, however, given that a technology that delivers big benefits necessarily inflicts at least some harm on at least some of the players.
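The Pareto test described above is mechanical enough to state precisely. The following sketch, with wholly invented stakeholders and payoff figures, checks whether a proposed design is a Pareto improvement over the status quo, and illustrates why a design that benefits its sponsor at even slight cost to a bystander fails the test.

```python
# Hypothetical illustration of the Pareto-optimality heuristic.
# Stakeholder names and payoff figures are invented for the sketch.

def is_pareto_improvement(before: dict, after: dict) -> bool:
    """True if no player is worse off and at least one is better off."""
    no_one_worse = all(after[p] >= before[p] for p in before)
    someone_better = any(after[p] > before[p] for p in before)
    return no_one_worse and someone_better

status_quo = {"sponsor": 10, "customer": 5, "bystander": 5}
proposal   = {"sponsor": 18, "customer": 7, "bystander": 4}

# The sponsor and customer gain, but the bystander loses:
print(is_pareto_improvement(status_quo, proposal))  # False
```

The point of the sketch is the asymmetry: a single worsened stakeholder is enough to fail the test, which is why 'big benefit' technologies so rarely satisfy it.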
The third factor, the exercise of control over the behaviour of AI, depends on the measurement of inputs and outputs, on ways of assessing whether the process is performing appropriately, and on having means available to adapt the process to avoid undesirable outcomes. The effectiveness of controls needs to be monitored by higher-order controllers, and hence depends on transparency, trackability and auditability.
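The control structure just described — measure inputs and outputs, assess performance, adapt, and log everything so that a higher-order controller can audit the decisions — can be sketched minimally as follows. The toy process, tolerance and gain-adjustment rule are all assumptions made for illustration, not a prescription.

```python
# Minimal sketch of a monitored, adaptable control process.
# The process model (output = gain * reading), the target and the
# tolerance are invented for the illustration.

audit_log = []   # trackability: supports review by higher-order controllers

def assess(output: float, target: float, tolerance: float) -> bool:
    """Is the process performing appropriately?"""
    return abs(output - target) <= tolerance

def control_step(gain: float, reading: float, target: float) -> float:
    output = gain * reading
    ok = assess(output, target, tolerance=1.0)
    if not ok:
        # Adapt the process to steer the output back toward the target.
        gain = gain * (target / output)
    # Record every input, output and adjustment for later audit.
    audit_log.append({"input": reading, "output": output,
                      "within_tolerance": ok, "gain": gain})
    return gain

gain = 2.0
for reading in [4.0, 5.0, 6.0]:   # target output is 10.0
    gain = control_step(gain, reading, target=10.0)
```

The audit log is the essential feature: without it, the transparency, trackability and auditability that the paragraph above calls for are unavailable to any higher-order controller.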
Because AI innovations are so often shrouded in mystery, coverage in mainstream and social media evidences a great many misunderstandings. Justifiable public concern exists, however, about five aspects of AI (Clarke 2019a). One is that artefacts embodying AI are being granted high levels of autonomy to act in the real world and to directly affect human beings. This undermines the possibility of exercising control. A second issue is inappropriate assumptions about data used by AI processes, such as whether it is accurate, complete, relevant and appropriately associated with a particular entity.
A third area of concern is inappropriate assumptions that the inferencing processes embodied in AI are appropriate to the purpose, suitable for all of the highly diverse circumstances that arise in real-world applications, and have access to suitable data. Where such assumptions are wrong, the realisation of the anticipated benefits is in doubt, harm is more likely to arise, and the exercise of control is compromised.
A fourth problem is the opaqueness of many AI processes. The absence of a humanly understandable explanation of how inferences are drawn precludes understanding, replication, audit, and the recognition and correction of errors. Decision processes are tending to become simplistically empirical rather than demonstrably rational. This undermines the accountability of organisations that apply AI, and hence the buck stops in the middle of nowhere. The fifth issue is the irresponsibility shown by organisations in all phases of the AI supply chain or industry network in relation to the safety and efficacy of the technologies they conceive, the artefacts they invent, and the systems that they develop, disseminate and apply.
Even after discounting the ill-informed aspects of public nervousness, there remains a substantial core of real concerns about AI. So it's entirely reasonable that there is push-back in the media, and from the public, regulators and law-makers. Some of the problems are outside the scope of engineers and computer scientists, but others fall squarely within their responsibility. Professional associations have pondered deep and long (see, in particular, IEEE 2017), but they continue to be very slow to deliver concise, actionable sets of principles to their members. What guidance is available to inventors, designers and implementors about how to identify and manage negative impacts, and exercise control, at the same time as delivering on AI's potential benefits?
Given the wide range of challenges, the three basic principles concerning benefits, harm and control do not provide an adequate basis for analysis. Professionals need more fully articulated guidance. This article introduces and provides access to a set of '50 Principles for Responsible AI'.
I was stimulated to develop this set of principles because of the number and diversity of proposals that were being published. Each has its own purposes, reflecting the nature of the sponsoring organisation. Corporations (such as Google, IBM, Microsoft and Sony) wish to project an aura of social responsibility. Industry associations (e.g. BSI, FLI, ITIC and Partnership on AI), supported by government agencies responsible for stimulating the economy (e.g. Departments of Industry), wish to hold off formal regulation. Civil society (e.g. the Internet Society, The Public Voice) and government agencies whose concerns include human rights or consumers' interests (e.g. the European Parliament and the European Commission) wish to balance economic and social interests. Each has a legitimate voice, and each document contained at least some ideas of value to both professionals seeking guidance and the economy and society as a whole.
I assembled a suite of 30 such documents. I then extracted the principles that they contained, identified commonalities, and built a super-set that encompasses them all. The resulting set of 50 Principles is presented as an Appendix to this article. It is in a form that can be metaphorically and even physically pinned to the cubicle wall. Access is provided to the working papers. These list and provide access to the 30 source documents, and show the sources on which each Principle was based. The expression of each Principle can be probed and challenged. Alternative expressions can be formulated. Interpretations can be proposed that offer more precise guidance in relation to particular forms of AI, particular applications of AI, and particular segments of the AI industry.
The set might have been redundant if any of the 30 sources already covered the field satisfactorily. The coverage proved to be generally very thin, however. Almost all of the sources contain fewer than half of the complete set, and many of them are very limited in scope. The only authoritative document that was found to have substantial coverage was that of the European Commission (EC 2019), and even that scores only 74%. For professionals whose focus is on goods and services specifically for EU countries, or for export, including to EU countries, the EC's document may be more directly relevant than the 50 Principles presented here.
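The coverage scores quoted above are simply the fraction of the consolidated 50 Principles that a given source addresses. A minimal sketch of the measure, using stand-in principle identifiers rather than the real cross-referencing data:

```python
# Hedged sketch of the coverage measure. The identifier universe is a
# stand-in for the real 50 Principles listed in the Appendix.

CONSOLIDATED = {f"{theme}.{i}" for theme in range(1, 11) for i in range(1, 6)}
# 10 themes x 5 principles each = a stand-in universe of 50 identifiers

def coverage(source_principles: set, consolidated: set = CONSOLIDATED) -> float:
    """Fraction of the consolidated set addressed by one source document."""
    return len(source_principles & consolidated) / len(consolidated)

# A source addressing 37 of the 50 Principles scores 74%,
# as reported above for EC (2019):
example_source = set(list(CONSOLIDATED)[:37])
print(f"{coverage(example_source):.0%}")  # 74%
```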
The set of 50 Principles provides guidance to professionals when they apply AI. The document can stimulate discussions within teams, and represents a reference-point when differences of opinion arise. On the other hand, neither this nor any other document can itself overcome the enormous array of challenges that confronts professionals. This section identifies aspects of the mindset that professionals need to bring to its use.
A great deal of diversity is evident in AI technologies, in the artefacts that AI is built into, in the systems that incorporate the artefacts, and in the uses to which the systems are put. In such circumstances, it is infeasible to formulate a general recipe or an industry Standard that can be applied by technicians. The appropriate form of guidance is a set of imperative statements that hover between the abstract and the concrete, and that require interpretation and application by professionals.
An example of a general Principle that can't be expressed in a way that applies equally well in every situation is "Ensure human control over autonomous behaviour of AI-based technology, artefacts and systems" (3.2). A line needs to be drawn somewhere between real-time decision-making, such as the equilibration of a vehicle's steering on uneven surfaces, or the tuning of the fuel-mix delivered into an engine, and deliberative decisions, such as the selection of a target, or determination of the guilt or innocence of a defendant.
Similarly, "Implement safeguards to avoid, prevent and mitigate negative impacts and implications" (4.4) has to be operationalised in each particular context, because designs need to exhibit as much variety as the technologies, the artefacts and systems, and the circumstances in which they're applied.
There are many conflicts between these people-protective Principles and the interests of other stakeholders. The organisations that sponsor the development of AI technology or of AI-based artefacts or systems can usually protect their own interests, but many other organisations may be involved, and there are impacts on the economy, society as a whole, and the environment.
Conflicts even arise within the 50 Principles. For example, it can be difficult to both "Ensure ... performance of intended functions" (7.1) and "Ensure people's physical health and safety ('nonmaleficence')" (4.1). To invoke a classic in the field, in the very first of Asimov's short stories in his long series on the Laws of Robotics, the robot Robbie caused (minor) harm to his young owner, because the act of snatching her out of the path of a tractor knocked her breathless (Asimov 1940).
The resolution of conflicts involves value judgements. Some fall within the designer's sphere of responsibility; but the big calls also involve consultation with stakeholders and regulators, and executive judgement.
The Principles need to be interpreted differently in different parts of the supply chain or industry network. A simple model of industry structure distinguishes research, invention, innovation, dissemination and application phases, with researchers producing AI technology, inventors using the ideas to produce AI-based artefacts, developers incorporating the artefacts within AI-based systems, purveyors of various kinds establishing an installed base of such systems, and user organisations applying them, resulting in impacts and implications.
The responsibilities of professionals playing roles higher up the supply-chain differ from the responsibilities of designers close to the applications end of the business. For example, "Deliver transparency and auditability" (6.) calls for purveyors and users to ensure understandability by affected stakeholders of each inference, decision and action that arises from AI-based systems. Higher up the chain, on the other hand, the Principle requires that inventors and innovators design transparency into AI technology, and ensure that developers and users of AI-based artefacts and systems can readily understand the nature of the underlying technology, and have the means available to them to devise ways of fulfilling their own transparency and auditability obligations.
A further challenge arises from the clumping together of rather different things under the banner of 'AI'. Many of the Principles are of greater significance in relation to some forms of AI than others; and many would therefore be more directly useful to professionals if they were re-phrased, re-framed or customised to one particular form of AI.
For many people, the most obvious technological threat is robotics. Robotics is migrating well beyond factory floors and automated warehouses, including to aircraft and watercraft. The large majority of the Principles are directly applicable to designers of autonomous vehicles and their support systems, both in controlled environments such as mines, quarries and dedicated bus routes, and where the vehicle is on the open road or in public airspace. On the other hand, the Principle "Design as an aid, for augmentation, collaboration and inter-operability" (2.1) is readily understood in the context of high-order functions such as mission control, anticipation of changes in weather conditions and collision-risk detection; whereas considerable care is needed when it is interpreted in the context of real-time control over a vehicle's attitude, position, course and collision-avoidance functions.
A second area usefully regarded as being within the AI field is cyborgisation, the process of enhancing individual humans by technological means, resulting in hybrids of a human and one or more artefacts. Many forms of cyborgisation fall outside the field of AI, such as spectacles, implanted lenses, stents, inert hip-replacements and SCUBA gear. Some enhancements qualify, however, by combining sensors and actuators with computational intelligence. Prominent examples include heart pacemakers (since 1958) and cochlear implants (since the 1960s, and commercially since 1978). Others include replacement legs for above-knee amputees that contain software to sustain balance within the knee-joint.
The Principle "Respect each person's autonomy, freedom of choice and right to self-determination" (3.4) is writ particularly large in the context of cyborgisation, i.e. free and informed consent is a pre-condition. On the other hand, "Ensure human review of inferences and decisions prior to action being taken" (3.5) requires careful interpretation, in order to balance the need for very prompt action against the need for individual self-determination. Another Principle requiring care in its application is "Avoid services being conditional on the acceptance of AI-based artefacts and systems" (3.7), which may be in direct conflict with the express desire of an applicant for a life-sustaining heart-pacemaker.
It might reasonably be expected that the medical implants field would provide leading exemplars of well-articulated guidance relevant to cyborgisation. However, studies conducted by investigative journalists and published in reputable media outlets have thrown considerable doubt on the effectiveness of the processes used to protect patients' interests and assure quality and safety (ICIJ 2019).
Two other forms of AI to which the 50 Principles are applicable are the relatively established area of rule-based expert systems, and the still-immature field variously referred to as neural networking or AI/ML. Unlike predecessor approaches such as algorithmic / procedural programming, the rule-based expert systems field embodies no conception of either a problem or a solution. A rule-base merely describes a problem-domain. Techniques in the machine-learning field differ even more from earlier approaches, in that they do not necessarily begin with active and careful modelling of a real-world problem-solution, problem or even problem-domain. Rather than comprising a set of entities and relationships that mirrors the key elements and processes of a real-world system, a neural network model may be simply a list of input variables and a list of output variables (and, in the case of 'deep' networks, intermediary variables). Moreover, the weightings assigned to each connection reflect the particular learning algorithm that was applied, and the characteristics of the training-set that was fed into it.
When these forms of AI are in play, a critical issue is how compliance can be achieved with the Principle "Ensure that people ... have access to humanly-understandable explanations of how [inferences, decisions and actions] came about" (6.3).
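The contrast drawn above between a rule-base and a trained network can be made concrete. In a rule-based system, the evaluation trace itself is a humanly-understandable explanation; a network offers only its weights. The rules and the case data below are invented for illustration.

```python
# Sketch of why Principle 6.3 is tractable for a rule-base but
# problematic for a trained network. Rules and case data are invented.

RULES = [
    ("income >= 50000", lambda c: c["income"] >= 50000),
    ("no_defaults",     lambda c: c["defaults"] == 0),
]

def decide_with_explanation(case: dict):
    """Evaluate each rule, keeping a humanly-readable trace."""
    trace = [(name, test(case)) for name, test in RULES]
    decision = all(ok for _, ok in trace)
    # The trace IS the explanation: which rule was applied, with what result.
    return decision, trace

decision, trace = decide_with_explanation({"income": 42000, "defaults": 0})
# decision is False, and the trace shows exactly which rule failed:
# [('income >= 50000', False), ('no_defaults', True)]

# By contrast, a trained network's internals are only weightings on
# connections -- e.g. [0.73, -1.12, ...] -- which name no rule at all.
```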
Further challenges arise from the need to "Ensure that inferences are not drawn from data using invalid or unvalidated techniques" (7.6), "Test result validity ..." (7.7), and "Impose controls in order to ensure that the safeguards are in place and effective" (7.8). It is also unclear how neural networking techniques can enable performance of "audits of the justification, the proportionality, the transparency ..." (8.3). Given that these are all existing features of systems that AI/ML is seeking to supplant, it's problematical, not to mention morally dubious, for designers to argue that these Principles aren't applicable, merely because they're difficult to comply with.
The litany of problems arising with learning algorithms, training-sets, data accuracy, suitability and compatibility of data definitions, obscure and even non-existent explanations, lack of validation, lack of safeguards, and lack of controls to ensure that the safeguards are working, represents ample reason for public disquiet about when, how and even if AI/ML should be applied to human affairs.
This brief review has of course merely scratched the surface of the 50 Principles and their application to AI. It has, however, brought to light the need for them to be not blindly but intelligently applied to the particular technology and in the particular context; and it has illustrated how public distrust of some AI technologies and AI-based artefacts and systems is not misguided, but grounded in reality.
A set of principles is no more than a checklist. A further key element of the framework needed for assuring responsible AI is a process within which the Principles can be used to guide design. The Principles imply this by saying "conduct impact assessment ..." (1.4). Impact assessment is an approach that originated in environmental contexts. It adopts perspectives different from or additional to those of the organisation that drives the initiative. It is accordingly commonly conducted by other organisations, outside the boundaries of the driving organisation.
Within the driving organisation, the well-established tools of risk assessment and risk management might fill this role. However, their conventional forms are inadequate for the purpose. This is because they adopt the perspective only of the organisation that is sponsoring the activity. The process includes the identification of stakeholders, but their interests are reflected only to the extent that harm to them may result in material harm to the sponsoring organisation.
Responsible application of AI is only possible if stakeholder analysis is undertaken in order not only to identify the categories of entities that may be affected by the particular project, but also to gain insight into those entities' needs and interests. Note too that the notion of a 'stakeholder' goes beyond the users of and participants in the system. People may be affected by a system, and therefore have a stake in it, even if they are what are sometimes called 'usees', outside the system. This arises, for example, in the operation of credit-reporting agencies, tenancy and insurance claim data-pools, and criminal intelligence systems. Users' families, their local communities, population segments and whole economic regions can be usees as well. In the context of autonomous vehicles, for example, stakeholders comprise a range of users, including individuals whose livelihood is threatened (e.g. taxi-, courier- and truck-drivers), their employers, their unions, vehicle-owners and transport-service-providers; together with usee categories, including passengers, occupants of other vehicles, pedestrians, and drivers' dependants.
Risk assessment processes need to be conducted from the perspective of each stakeholder group, to complement that undertaken from the organisation's perspective. The various, and inevitably partly conflicting, assessments then need to be integrated, in order to deliver a multi-stakeholder risk management plan (Clarke 2019b).
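The integration step can be sketched as a simple merge of per-stakeholder risk registers, recording for each risk which groups raised it and the worst severity any of them assigned. The stakeholders, risks and severity ratings below are all hypothetical, loosely echoing the autonomous-vehicle example above.

```python
# Sketch of integrating per-stakeholder risk assessments into one
# multi-stakeholder plan. All stakeholders, risks and severity
# ratings (1 = negligible .. 5 = severe) are hypothetical.

from collections import defaultdict

assessments = {
    "operator":    {"sensor failure": 2, "data breach": 3},
    "pedestrians": {"sensor failure": 5},
    "passengers":  {"sensor failure": 4, "data breach": 2},
}

def integrate(per_stakeholder: dict) -> dict:
    """For each risk, record who raised it and the worst rating given."""
    plan = defaultdict(lambda: {"raised_by": [], "max_severity": 0})
    for stakeholder, risks in per_stakeholder.items():
        for risk, severity in risks.items():
            entry = plan[risk]
            entry["raised_by"].append(stakeholder)
            entry["max_severity"] = max(entry["max_severity"], severity)
    return dict(plan)

plan = integrate(assessments)
# 'sensor failure' carries max_severity 5, driven by pedestrians --
# a usee group that a sponsor-only assessment would have missed.
```

The design choice of keeping the maximum severity, rather than an average, reflects the point of the exercise: a risk that is minor for the sponsor but severe for a usee group must not be averaged away.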
This article has presented a set of Principles for Responsible AI derived by consolidating those proposed by a diverse assortment of organisations. The set therefore has at least superficial validity as a proxy for what 'the public as a whole' thinks AI needs to look like. Attention was drawn to the only one of the 30 source documents that has moderately good coverage of the field (EC 2019).
The 50 Principles represent guidance to professionals in the development and application of AI. They are capable of being further adapted, in order to relate more directly to particular forms of AI, to particular areas of application, and to particular segments of the AI industry.
The brief analysis possible within this short article has shown ways in which the 50 Principles can be applied, and can be fitted within a risk assessment process. It has also brought to the surface a few of the challenges that AI researchers, inventors, developers, purveyors and users must address in order to satisfy the needs of stakeholders, and overcome the concerns of the general public, and of legislators, regulators, financiers and insurers.
Once technologies and process standards have matured, the 50 Principles for Responsible AI will be redundant, replaced by more specific formulations for each category of what is currently termed AI. However, given that the first 60 years of the AI field have seen so little progress in establishing such guidance, the set presented here may be 'as good as it gets' for some time to come.
Also available is a 2-page PDF version, suitable for printing
AI offers prospects of considerable benefits and disbenefits. All entities involved in creating and applying AI have obligations to assess its short-term impacts and longer-term implications, to demonstrate the achievability of the postulated benefits, to be proactive in relation to disbenefits, and to involve stakeholders in the process.
1.1 Conceive and design only after ensuring adequate understanding of purposes and contexts
1.2 Justify objectives
1.3 Demonstrate the achievability of postulated benefits
1.4 Conduct impact assessment, including risk assessment from all stakeholders' perspectives
1.5 Publish sufficient information to stakeholders to enable them to conduct their own assessments
1.6 Conduct consultation with stakeholders and enable their participation in design
1.7 Reflect stakeholders' justified concerns in the design
1.8 Justify negative impacts on individuals ('proportionality')
1.9 Consider alternative, less harmful ways of achieving the same objectives
Considerable public disquiet exists in relation to the replacement of human decision-making by inhumane decision-making by AI-based artefacts and systems, and displacement of human workers by AI-based artefacts and systems.
2.1 Design as an aid, for augmentation, collaboration and inter-operability
2.2 Avoid design for replacement of people by independent artefacts or systems, except in circumstances in which those artefacts or systems are demonstrably more capable than people, and even then ensuring that the result is complementary to human capabilities
Considerable public disquiet exists in relation to the prospect of humans being subject to obscure AI-based processes, and ceding power to AI-based artefacts and systems.
3.1 Ensure human control over AI-based technology, artefacts and systems
3.2 In particular, ensure human control over autonomous behaviour of AI-based technology, artefacts and systems
3.3 Respect people's expectations in relation to personal data protections, including their awareness of data-usage, their consent, data minimisation, public visibility and design consultation and participation, and the relationship between data-usage and the data's original purpose
3.4 Respect each person's autonomy, freedom of choice and right to self-determination
3.5 Ensure human review of inferences and decisions prior to action being taken
3.6 Avoid deception of humans
3.7 Avoid services being conditional on the acceptance of AI-based artefacts and systems
All entities involved in creating and applying AI have obligations to provide safeguards for all human stakeholders, whether as users of AI-based artefacts and systems, or as usees affected by them, and to contribute to human stakeholders' wellbeing.
4.1 Ensure people's physical health and safety ('nonmaleficence')
4.2 Ensure people's psychological safety, by avoiding negative effects on their mental health, emotional state, inclusion in society, worth, and standing in comparison with other people
4.3 Contribute to people's wellbeing ('beneficence')
4.4 Implement safeguards to avoid, prevent and mitigate negative impacts and implications
4.5 Avoid violation of trust
4.6 Avoid the manipulation of vulnerable people, e.g. by taking advantage of individuals' tendencies to addictions such as gambling, and to letting pleasure overrule rationality
All entities involved in creating and applying AI have obligations to avoid, prevent and mitigate negative impacts on individuals, and to promote the interests of individuals.
5.1 Be just / fair / impartial, treat individuals equally, and avoid unfair discrimination and bias, not only where they are illegal, but also where they are materially inconsistent with public expectations
5.2 Ensure compliance with human rights laws
5.3 Avoid restrictions on, and promote, people's freedom of movement
5.4 Avoid interference with, and promote, privacy, family, home or reputation
5.5 Avoid interference with, and promote, the rights of freedom of information, opinion and expression, of freedom of assembly, of freedom of association, of freedom to participate in public affairs, and of freedom to access public services
5.6 Where interference with human values or human rights is outweighed by other factors, ensure that the interference is no greater than is justified ('harm minimisation')
All entities have obligations in relation to due process and procedural fairness. These obligations can only be fulfilled if all entities involved in creating and applying AI ensure that humanly-understandable explanations are available to the people affected by AI-based inferences, decisions and actions.
6.1 Ensure that the fact that a process is AI-based is transparent to all stakeholders
6.2 Ensure that data provenance, and the means whereby inferences are drawn from it, decisions are made, and actions are taken, are logged and can be reconstructed
6.3 Ensure that people are aware of inferences, decisions and actions that affect them, and have access to humanly-understandable explanations of how they came about
All entities involved in creating and applying AI have obligations in relation to the quality of business processes, products and outcomes.
7.1 Ensure effective, efficient and adaptive performance of intended functions
7.2 Ensure data quality and data relevance
7.3 Justify the use of data, commensurate with each data-item's sensitivity
7.4 Ensure security safeguards against inappropriate data access, modification and deletion, commensurate with its sensitivity
7.5 Deal fairly with people ('faithfulness', 'fidelity')
7.6 Ensure that inferences are not drawn from data using invalid or unvalidated techniques
7.7 Test result validity, and address the problems that are detected
7.8 Impose controls in order to ensure that the safeguards are in place and effective
7.9 Conduct audits of safeguards and controls
All entities involved in creating and applying AI have obligations to ensure resistance to malfunctions (robustness) and recoverability when malfunctions occur (resilience), commensurate with the significance of the benefits, the data's sensitivity, and the potential for harm.
8.1 Deliver and sustain appropriate security safeguards against the risk of compromise of intended functions arising from both passive threats and active attacks, commensurate with the significance of the benefits and the potential to cause harm
8.2 Deliver and sustain appropriate security safeguards against the risk of inappropriate data access, modification and deletion, arising from both passive threats and active attacks, commensurate with the data's sensitivity
8.3 Conduct audits of the justification, the proportionality, the transparency, and the harm avoidance, prevention and mitigation measures and controls
8.4 Ensure resilience, in the sense of prompt and effective recovery from incidents
All entities involved in creating and applying AI have obligations in relation to due process and procedural fairness. The obligations include the entity ensuring that it is discoverable, and addressing problems as they arise.
9.1 Ensure that the responsible entity is apparent or can be readily discovered by any party
9.2 Ensure that effective remedies exist, in the form of complaints processes, appeals processes, and redress where harmful errors have occurred
Each entity's obligations in relation to due process and procedural fairness include the implementation of systematic problem-handling processes, and respect for and compliance with external problem-handling processes.
10.1 Ensure that complaints, appeals and redress processes operate effectively
10.2 Comply with external complaints, appeals and redress processes and outcomes, including, in particular, provision of timely, accurate and complete information relevant to cases
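The Principles above are intended for the cubicle wall, but within a development team they can equally be encoded as a simple review checklist, recording for each Principle whether the project can point to evidence of compliance. The theme selection, evidence descriptions and project data below are illustrative only; the identifiers follow the Appendix numbering.

```python
# Illustrative encoding of a (small subset of) the 50 Principles as a
# project-review checklist. Descriptions are abbreviated; the project
# evidence is hypothetical.

checklist = {
    "1.4": "Conduct impact assessment from all stakeholders' perspectives",
    "3.2": "Ensure human control over autonomous behaviour",
    "6.3": "Provide humanly-understandable explanations",
    "7.8": "Impose controls to ensure safeguards are in place and effective",
}

def review(project_evidence: dict) -> list:
    """Return the principle ids for which the project shows no evidence."""
    return [pid for pid in checklist if not project_evidence.get(pid)]

# A hypothetical project that documented impact assessment and human
# control, but nothing on explanations or on controls over safeguards:
gaps = review({"1.4": "assessment report", "3.2": "override design"})
print(gaps)  # ['6.3', '7.8']
```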
The two articles by the author provide access to c.70 references.
Asimov I. (1940) 'Robbie' first published in Super Science Stories, September 1940, republished in Asimov I. (1950) 'I Robot' Gnome Press, 1950
Clarke R. (2019a) 'Why the World Wants Controls over Artificial Intelligence' Computer Law & Security Review 35, 4 (2019) 423-433, PrePrint at http://www.rogerclarke.com/EC/AII.html
Clarke R. (2019b) 'Principles and Business Processes for Responsible AI' Computer Law & Security Review 35, 4 (2019) 410-422, PrePrint at http://www.rogerclarke.com/EC/AIP.html
EC (2019) 'Ethics Guidelines for Trustworthy AI' High-Level Expert Group on Artificial Intelligence, European Commission, April 2019, at https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
ICIJ (2019) 'Implant Files' International Consortium of Investigative Journalists, 2019, at https://www.icij.org/investigations/implant-files/
IEEE (2017) 'Ethically Aligned Design: The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems' Version 2, IEEE, December 2017, at http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html
The first two documents identify, extract and cite the 30 source documents, and the third relates the 30 source documents to the consolidated set of 50 Principles:
Ethical Analysis and Information Technology (8 Source Documents)
Principles for AI: A SourceBook (22 Source Documents)
The 50 Principles Cross-Referenced to the Source-Documents
This paper has benefited from feedback from multiple colleagues, and particularly Prof. Graham Greenleaf and Kayleen Manwaring of UNSW, Robin Eckermann, and Peter Leonard.
Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.
Created: 11 July 2018 - Last Amended: 15 January 2020 by Roger Clarke