
Principles for Responsible AI

Revision of 20 February 2019

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2018-19

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/EC/PRAI.html


Abstract

Considerable public disquiet exists about various forms and aspects of Artificial Intelligence (AI). This may easily erupt into public opposition, and could result in kneejerk regulatory measures harmful to progress in the field. Action has long been needed by professional and industry associations, but it has been very slow in arriving. A suite of 26 proposals published by highly diverse organisations was used as a basis for consolidating sets of 10 abstract and 50 more specific Principles. These offer concrete guidance for practitioners in the various phases of the AI supply chain.


Contents

1. Introduction
2. What's 'AI'?
3. Responsibilities Within the AI Supply Chain
4. The Process
5. The Principles
6. Application of the Principles
Appendix 1: The 50 Principles
References


1. Introduction

The last few years have seen another surge in interest and progress in Artificial Intelligence (AI). Several current technologies are widely considered to have the potential for significant impact. There is, however, nervousness among practitioners regarding the extent to which public concerns about negative impacts may hinder investment and deployment.

Journals, conferences and industry publications feature many discussions of the issues, and a variety of ethical analyses and proposals for 'principles for AI'. There are commonalities among the concerns and the proposals. However, even a cursory glance shows a great deal of diversity. Further, although many elements are evident, most of them appear in only a small proportion of the sources.

A comprehensive view of public expectations is highly desirable. In order to develop such a view, I identified a suite of 26 documents. Based on prior reading, I postulated a set of 10 themes. I then extracted propositions from the suite of documents, and allocated them to the themes. In order to achieve adequate cohesion and readability, I made adjustments to the themes, and to the expression of many of the elements.

The themes are presented in Table 2, identified as 'The 10 Principles', and the complete set of propositions is in Appendix 1, labelled 'The 50 Principles'. This article briefly reviews AI, provides background to the process of extracting, organising and expressing the Principles, discusses their nature, and identifies potential applications.


2. What's 'AI'?

Some of the differences among the sources used in the present study might be expected to arise from varying interpretations of the nature of AI. Conventionally in the AI field, 'intelligence' is exhibited by an artefact if it evidences perception and cognition of relevant aspects of its environment, has goals, and formulates actions towards the achievement of those goals (Albus 1991, Russell & Norvig 2009, McCarthy 2007). The term 'artificial' implies 'human-made', using a yardstick that is open to interpretation variously as 'equivalent to human' or 'superior to human'.

In practice, a great deal of AI is 'different from human', often in relation to both processes and outcomes. Artefacts and humans have comparative advantages over one another, so it can be argued that a more appropriate target than either 'equivalent' or 'superior' would be 'complementary intelligence': "information technologists [should] delineate the relationship between robots and people by applying the concept of decision structuredness to blend computer-based and human elements advantageously" (Clarke 1989, 1993, 2014). 'Complementary intelligence' would (1) do things well that humans do badly or cannot do at all; and (2) function as elements within systems that include both humans and artefacts, with effective, efficient and adaptable interfacing among them all.

A form of AI that continues to attract public attention is robotics. It has migrated well beyond the factory floor and warehouses, into such areas as control over the attitude, position and course of craft on or in water, and in the air. It is achieving market penetration in self-driving vehicles, variously on rails and otherwise, in controlled environments such as mines, quarries and dedicated bus routes, but recently also in more open environments. Further examples can be found in the 'Internet of Things' (IoT) movement, and in related initiatives under such rubrics as 'smart houses' and 'smart cities'.

A second area usefully regarded as being within the AI field is cyborgisation, by which I mean the process of enhancing individual humans by technological means, resulting in hybrids of a human and one or more artefacts (Mann & Niedzviecki 2001, Clarke 2005, Warwick 2014). Many forms of cyborgisation fall outside the field of AI, such as spectacles, implanted lenses, stents, inert hip-replacements and SCUBA gear. However, a proportion of the artefacts used to enhance humans qualify, by combining sensors, computational or programmatic 'intelligence', and one or more actuators. Examples include heart pacemakers (since 1958), cochlear implants (since the 1960s, and commercially since 1978), and some replacement legs for above-knee amputees, in that the artificial knee contains software to sustain balance within the joint.

Two decades ago, rule-based expert systems had a high profile. Software embodying sets of rules applies those rules to data, and draws inferences (Giarratano & Riley 1998). Unlike algorithmic or procedural approaches, rule-based expert systems embody no conception of either a problem or a solution: a rule-base merely describes a problem-domain (Clarke 1991).
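
A minimal sketch may make this concrete. The following Python fragment applies a hypothetical rule-base to a set of facts by forward chaining; the rules, facts and names are illustrative only, and no particular expert-system shell is assumed:

    # Forward-chaining over a rule-base: rules are applied to data
    # until no further inferences can be drawn. The rule-base merely
    # describes a (hypothetical) problem-domain; it embodies no
    # conception of a 'problem' or a 'solution'.

    facts = {"has_fever", "has_rash"}

    # Each rule pairs a set of antecedent facts with a consequent fact.
    rules = [
        ({"has_fever", "has_rash"}, "suspect_measles"),
        ({"suspect_measles"}, "refer_to_specialist"),
    ]

    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True

    print(facts)
    # {'has_fever', 'has_rash', 'suspect_measles', 'refer_to_specialist'}

Note that the order of inference emerges from the data and the rules, not from any procedural specification of a solution.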

During the last decade, the form of AI uppermost in many people's minds has been neural networks, a data analytics technique within the machine-learning area. This differs from previous approaches in that it does not necessarily begin with active and careful modelling of a real-world problem-solution, problem, or even problem-domain. Rather than comprising a set of entities and relationships that mirrors the key elements and processes of a real-world system, a neural network model may be simply a list of input variables and a list of output variables (and, in the case of 'deep' networks, intermediary variables). The weightings imputed for each connection reflect the characteristics of the training-set that was fed in, and of the particular learning algorithm that was imposed on the training-set. These features combine with questions about the semantics, selectivity, accuracy and compatibility of the data to give rise to uncertainty about the technique's degree of affinity with the real world to which it is applied.
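
The point can be illustrated with a deliberately small Python sketch of a single-layer network trained by gradient descent. The data, dimensions and learning-rate are arbitrary assumptions; the sketch is intended only to show that the imputed weights are an artefact of the training-set and the learning algorithm, not of any explicit real-world model:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((100, 3))                 # 100 cases of 3 input variables
    y = (X @ np.array([0.5, -1.0, 2.0]) > 0.5).astype(float)   # toy target

    w = np.zeros(3)                          # connection weights
    b = 0.0
    for _ in range(1000):                    # a simple learning algorithm
        p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid activation
        w -= 0.5 * (X.T @ (p - y)) / len(y)  # gradient of the log-loss
        b -= 0.5 * (p - y).mean()

    # The weights say nothing about the world as such; they reflect
    # only the training-set and the algorithm imposed on it.
    print(w, b)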

These four examples of AI evidence enormous diversity, raising questions about the extent to which generic proposals can effectively address all forms of AI, or whether it is more appropriate to establish distinct sets of principles, or in legal terms, sui generis regulation. On the other hand, there are common features. All four forms involve software drawing inferences, in some cases making decisions, and in some of those cases taking action. In addition, all do this by means that are obscure, and that do not readily support humanly-understandable explanations.

The field might be better characterised by adopting a term that reflects the change in emphasis. The term 'AI' has long been associated with the idea of 'machines that think'. An alternative term, such as 'intellectics', would have the advantage of instead implying 'computers that do'. Sensor-computer-actuator packages are now generating a strong impulse for action to be taken in and on the real world. The new world of intellectics sees artefacts at the very least communicating a recommendation to a human, but sometimes generating a default-decision that is subject to being countermanded or overridden by a human, and even acting autonomously based on the inferences they have drawn.

In the near future, it may be worthwhile re-casting the propositions discussed below to address 'complementary intelligence' and 'intellectics'. Currently, however, the mainstream discussion is about 'AI', and the remainder of this article reflects that norm.


3. Responsibilities Within the AI Supply Chain

Robots act directly on the world, with varying degrees of autonomy. Expert systems and neural nets at least recommend, and even decide. Cyborgisation involves interference with the human body, possibly consensually but possibly not, and with varying degrees of controllability by the individual concerned. Many people reasonably enough feel discomfort about developments of such kinds, and some did even at the dawn of the robotics era: "every degree of independence we give the machine is a degree of possible defiance of our wishes. The genie in the bottle will not willingly go back in the bottle, nor have we any reason to expect them to be well disposed to us" (Wiener, 1949, quoted in Markoff 2013). If and when homo sapiens cedes power to homo roboticus and/or roboticus sapiens, it may involve a conscious act by humanity, but it might merely be the culmination of a long series of seemingly small decisions by technocrats (Menzel & D'Aluisio 2001, Clarke 2014).

If the discomfort felt by people is to be addressed, and if such substantive problems as actually exist are to be solved, who bears what responsibilities? Discussion about responsibility for AI is often clouded by inadequate discrimination among the successive phases of the supply-chain from laboratory experiment to deployment in the field, and by failure to assign responsibilities to the categories of entities that are active in each phase.

In Table 1, distinctions are made among technology, artefacts that embody the technology, systems that incorporate the artefacts, the process of dissemination of artefacts and systems, and their application. If a model of this kind is adopted, then legal and moral responsibilities can be assigned in each phase, respectively to researchers, to inventors, to innovators, to purveyors, and to users.

Table 1: Entities with Responsibilities in Relation to AI

Phase           Result                       Direct Responsibility
-------------   --------------------------   -----------------------------------
Research        AI Technology                Researchers
Invention       AI-Based Artefacts           IR&D Engineers
Innovation      AI-Based Systems             Developers
Dissemination   Installed AI-Based Systems   Purveyors
Application     Impacts and Implications     User Organisations and Individuals
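
Where a model of this kind is operationalised, for instance in a compliance checklist or audit tool, Table 1 can be captured as a simple mapping. The following Python sketch is illustrative only; the structure is my own, not drawn from any of the source documents:

    # Table 1 as a data structure: phase -> (result, directly responsible entity).
    SUPPLY_CHAIN = {
        "Research":      ("AI Technology",              "Researchers"),
        "Invention":     ("AI-Based Artefacts",         "IR&D Engineers"),
        "Innovation":    ("AI-Based Systems",           "Developers"),
        "Dissemination": ("Installed AI-Based Systems", "Purveyors"),
        "Application":   ("Impacts and Implications",   "User Organisations and Individuals"),
    }

    for phase, (result, entity) in SUPPLY_CHAIN.items():
        print(f"{phase}: {entity} bear direct responsibility for '{result}'")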


4. The Process

Many organisations have published documents acknowledging public concerns about AI. The 'Principles for Responsible AI' presented below arise from a consolidation of ideas from a suite of 26 documents that have gone beyond problems and contributed towards solutions. The suite was assembled by surveying the academic, professional and policy literatures. Diversity of perspective was actively sought. The sources were governmental organisations (7), non-government organisations (6), corporations and industry associations (5), professional associations (2), joint associations (2), and academics (4). Only sets available in the English language were used. An analysis by region of origin shows 5 pan-world, 4 pan-European, 10 US, 4 UK, 1 Australian, 1 Japanese and 1 Korean. Of the documents, 8 are formulations of ethical principles for IT generally, and the other 18 provide guidance specifically in relation to AI.

In the previous section, distinctions were drawn among the responsible entities in the successive phases of the supply-chain. Such distinctions were evident in only a few of the 26 documents in the suite, and in most cases the part of the supply-chain that a document is intended to address has to be inferred. The European Parliament (CLA-EP 2016) refers to "design, implementation, dissemination and use", IEEE (2017) to "Manufacturers / operators / owners", GEFA (2016) to "manufacturers, programmers or operators", FLI (2017) to researchers, designers, developers and builders, and ACM (2017) to "Owners, designers, builders, users, and other stakeholders". Remarkably, however, in all of these cases the distinctions were made only within a single Principle rather than being applied to the set as a whole.

Some commonalities exist across the source documents. However, many of them contain only a few propositions, and overall there is far less consensus than might be expected more than 60 years after AI was first heralded. For example, only 1 document expressly encompasses cyborgisation (GEFA 2016), and only 2 documents refer to the precautionary principle (CLA-EP 2016, GEFA 2016).

The analysis adopted a conservative approach, whereby a document was scored against a Principle if the idea was in some way evident, even if its coverage of the Principle as a whole was limited. Yet, on average, each Principle was reflected in only 5 of the 26 documents. The most recent document, the European Commission's Draft Ethics Guidelines for Trustworthy AI (EC 2018), evidenced the largest percentage of the 50 Principles, but even it mustered only 28/50 (56%).
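
The scoring can be thought of as a boolean matrix of 26 documents by 50 Principles. The Python sketch below computes such summary statistics from a matrix of that shape; the random stand-in data merely approximates the observed density, and the actual scores are in the supporting materials (Clarke 2018b):

    import numpy as np

    rng = np.random.default_rng(1)
    # Stand-in for the real scores: True where a document in some way
    # evidences a Principle. ~19% density yields roughly 5 of 26.
    scores = rng.random((26, 50)) < 0.19

    per_principle = scores.sum(axis=0)   # documents reflecting each Principle
    per_document = scores.sum(axis=1)    # Principles evident in each document

    print("mean documents per Principle:", per_principle.mean())  # ~5 of 26
    print("best-covered document:", per_document.max(), "of 50")  # cf. EC 2018 at 28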

It was also striking how few of the 50 Principles were detectable in the majority of the documents. Only 8/26 stipulated 'Conduct impact assessment' (Principle 1.4). Even 'Ensure people's wellbeing ('beneficence')' (4.3) was evident in only 12/26, and only four Principles were reflected in at least half of the documents, three of them only just.

Each of the sources naturally reflects the express, implicit and subliminal purposes of the drafters and of the organisations on whose behalf they were composed. In some cases, for example, the set primarily addresses just one form of AI, such as robotics or machine-learning. Documents prepared by corporations, industry associations, and even professional associations and joint associations tended to adopt the perspective of producer roles, with the interests of other stakeholders often relegated to a secondary consideration. For example, the joint-association Future of Life Institute perceives the need for "constructive and healthy exchange between AI researchers and policy-makers", but not for any participation by stakeholders (FLI 2017 at 3). As a result, transparency is constrained to a small sub-set of circumstances (at 6). The responsibility of 'designers and builders' is limited by those roles being nominated as mere 'stakeholders in moral implications' (at 9). Alignment with human values is seen as being necessary only in respect of "highly autonomous AI systems" (at 10). And "strict safety and control measures" are limited to a small sub-set of AI systems (at 22).

Similarly, ITIC (2017) considers that many responsibilities lie elsewhere, and assigns responsibilities to its members only in respect of safety, controllability and data quality. ACM (2017) is expressed in weak language (should be aware of, should encourage, are encouraged) and decision opaqueness is regarded as being acceptable, while IEEE (2017) suggests a range of important tasks for other parties (standards-setters, regulators, legislatures, courts), and phrases other suggestions in the passive voice, with the result that few obligations are clearly identified as falling on engineering professionals and the organisations that employ them. The House of Lords report might have been expected to adopt a societal or multi-stakeholder approach, yet, as reinforced by Smith (2018), it adopts the perspective of the AI industry.

The process of developing the Principles commenced with a set of themes derived from my prior background supplemented by first-pass reading of the selected documents. The documents were then inspected in greater detail. Propositions within each set were identified, extracted, and allocated to themes, maintaining back-references to the sources. Where items threw doubt on the structure or formulation of the general themes, the schema was adapted. Where similar points were expressed in varying ways, forms of words were selected or developed in order to sustain coherence and limit the extent to which the final set contains duplications. Of course, no claim is made that the selection of source-documents is complete or representative, or that the interpretations and the expressions used are the sole possibility, or even necessarily the most appropriate alternative.


5. The Principles

All themes, and all detailed propositions, have been expressed in imperative mode, i.e. in the form of instructions, in order to convey that they require action, rather than being merely desirable characteristics, or factors to be considered, or issues to be debated. In this form, the themes and propositions serve as a foundation for guiding and evaluating behaviour. They have therefore been referred to as 'Principles', in the sense of "a fundamental motive or reason for action, esp. one consciously recognized and followed" (OED 4a). The full set, in Appendix 1, is referred to as 'The 50 Principles', and the summary set, presented in Table 2, as 'The 10 Principles'.

Table 2: Responsible AI Technologies, Artefacts, Systems and Applications
The 10 Principles

The following Principles apply to each entity responsible for each phase of AI research, invention, innovation, dissemination and application.

1. Assess Positive and Negative Impacts and Implications

AI offers prospects of considerable benefits and disbenefits. All entities involved in creating and applying AI have legal and moral obligations to assess the short-term impacts and longer-term implications, to demonstrate the achievability of the postulated benefits, to be proactive in relation to disbenefits, and to involve stakeholders in the process.

2. Complement Humans

Considerable public disquiet exists in relation to the replacement of human decision-making with inhumane decision-making by AI-based artefacts and systems, and displacement of human workers by AI-based artefacts and systems.

3. Ensure Human Control

Considerable public disquiet exists in relation to the prospect of humans being subject to obscure AI-based processes, and ceding power to AI-based artefacts and systems.

4. Ensure Human Safety and Wellbeing

All entities involved in creating and applying AI have legal and moral obligations to provide safeguards for all human stakeholders, whether as users of AI-based artefacts and systems, or as usees affected by them, and to contribute to human stakeholders' wellbeing.

5. Ensure Consistency with Human Values and Human Rights

All entities involved in creating and applying AI have legal and moral obligations to avoid, prevent and mitigate negative impacts on, and to promote the interests of, individuals.

6. Deliver Transparency and Auditability

All entities have legal and moral obligations in relation to due process and procedural fairness. These obligations can only be fulfilled if all entities involved in creating and applying AI ensure that humanly-understandable explanations are available to the people affected by AI-based inferences, decisions and actions.

7. Embed Quality Assurance

All entities involved in creating and applying AI have legal and moral obligations in relation to the quality of business processes, products and outcomes.

8. Exhibit Robustness and Resilience

All entities involved in creating and applying AI have legal and moral obligations to ensure resistance to malfunctions (robustness) and recoverability when malfunctions occur (resilience), commensurate with the significance of the benefits, the data's sensitivity, and the potential for harm.

9. Ensure Accountability for Legal and Moral Obligations

All entities involved in creating and applying AI have legal and moral obligations in relation to due process and procedural fairness. These obligations can only be fulfilled if each entity is discoverable, and each entity addresses problems as they arise.

10. Enforce, and Accept Enforcement of, Liabilities and Sanctions

All entities involved in creating and applying AI have legal and moral obligations in relation to due process and procedural fairness. These obligations can only be fulfilled if the entity implements problem-handling processes, and respects and complies with external problem-handling processes.

--------------

In order to facilitate audit and re-analysis, access is provided to supporting materials (Clarke 2018b), comprising citations of, and extracts from, the 26 sources (Parts 1 and 2), and a version of 'The 50 Principles' that includes back-references to the sources (Part 3). A list is also provided of items that appear in source documents but have not been included in 'The 50 Principles' (Part 4). This is variously because they involve imprecise abstractions that are difficult to operationalise (e.g. 'human dignity', 'fairness' and 'justice'), or because they fall outside the scope of the present work.

Each of the Principles requires somewhat different application in each phase of the AI supply-chain. An important example is the manner in which Principle 6 (Deliver Transparency and Auditability) is intended to be interpreted. In the Research and Invention phases of the technological life-cycle, compliance with Principle 6 requires understanding by inventors and innovators of the AI technology, and explicability to developers and users of AI-based artefacts and systems. During the Innovation and Dissemination phases, the need is for understandability and manageability by developers and users of AI-based systems and applications, and explicability to affected stakeholders. In the Application phase, the emphasis shifts to understandability by affected stakeholders of inferences, decisions and actions arising from at least the AI elements within AI-based systems and applications.

The status of the proposed principles is important to appreciate. They are not expressions of law - although in some jurisdictions, and in some circumstances, some may of course be legal requirements. They are expressions of moral obligations; but no authority exists that can formally impose such obligations. All are contestable. They represent guidance to organisations involved in AI as to the expectations of courts, regulatory agencies, oversight agencies, competitors and stakeholders. They need to be taken into account as organisations undertake risk assessment and risk management.

In many circumstances, the Principles will be in conflict with other legal or moral obligations, and with various interests of various stakeholders. It can be argued, however, that each Principle creates an onus on each responsible entity. On that reading, each entity needs to ensure that the result of its endeavours is compliant, or that, to the extent that it is non-compliant, the organisation documents the factors that militate against compliance, documents the basis for its judgement that the importance of those factors outweighs that of the Principle, and departs from the Principle only to the extent justified by those factors.
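
Read in that way, each Principle implies a documentation discipline. The following Python sketch records the judgement envisaged above for a single Principle; the structure and field-names are my own illustrative assumptions, not a prescribed format:

    from dataclasses import dataclass, field

    @dataclass
    class ComplianceRecord:
        principle: str                  # e.g. "6. Deliver Transparency and Auditability"
        compliant: bool
        countervailing_factors: list = field(default_factory=list)
        justification: str = ""         # why those factors outweigh the Principle
        divergence: str = ""            # departure only to the justified extent

    record = ComplianceRecord(
        principle="6. Deliver Transparency and Auditability",
        compliant=False,
        countervailing_factors=["trade-secret protection of the model"],
        justification="Documented judgement that the factor outweighs the Principle",
        divergence="Explanations provided to affected individuals in summary form only",
    )
    print(record)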


6. Application of the Principles

The Principles are intentionally framed and phrased in an abstract manner, in an endeavour to achieve general applicability. However, they lend themselves to customisation, and along multiple dimensions.

Firstly, as presaged in the earlier discussion of Principle 6, the language can be adapted to apply more precisely and clearly to each of the five phases of the AI supply chain. For example, the direct responsibilities of researchers and their employing institutions relate to the origination and potentialities of technologies, and the Principles can be phrased in ways familiar in that sector; whereas purveyors and users of AI-based artefacts and systems use the dialects of business and government, focus on application in the real world, and must directly account for impacts on individuals. However, each category of entities creates risks, and hence obligations exist along the entire AI supply chain. For any highly impactful technology, whether nuclear energy or AI, some responsibilities reach all the way back to Phase 1, and to researchers.

Secondly, it will be beneficial for versions to be established that are specifically applicable to each form of AI. This could be prioritised for currently mainstream forms such as robotics, particularly remote-controlled and self-driving vehicles; cyborgisation; and machine-learning and neural-networking applications. I have previously proposed 'Guidelines for Responsible Data Analytics' (Clarke 2018a), which are relevant to all forms of data analytics projects, including those that apply neural-networking approaches. Such guidance documents will benefit from reconsideration in the light of these Principles for Responsible AI.

The 'Principles for AI' are also capable of being further articulated into much more specific guidance in respect of sub-categories of AI technologies, artefacts, systems and applications. For example, sets could be spawned for each of vehicle collision-avoidance capabilities, specific prosthetic devices, and neural-network-based creditworthiness scoring schemes.

In addition to such primary uses of the Principles, they can be used as a basis for the ex post facto evaluation of existing technologies, artefacts, systems and applications. Further, independently-developed bodies of principles and guidelines can be compared with this set, in order to assess their comprehensiveness. That is of course not intended to imply that this set, based as it is on analysis of a mere 26 documents, is an uncontestable authority. Variances between sets may well lead to proposals for adaptation of this set, rather than to the deprecation of the alternative set. Finally, the Principles might require modest re-casting in order to be readily applicable to what I proposed above as more appropriate conceptualisations of the field - complementary intelligence and intellectics.


Appendix 1: The 50 Principles

These are presented in a 2-page PDF format.


References

ACM (2017) 'Statement on Algorithmic Transparency and Accountability' Association for Computing Machinery, January 2017, at https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf

Albus J. S. (1991) 'Outline for a theory of intelligence' IEEE Trans. Systems, Man and Cybernetics 21, 3 (1991) 473-509, at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.410.9719&rep=rep1&type=pdf

Clarke R. (1989) 'Knowledge-Based Expert Systems: Risk Factors and Potentially Profitable Application Area', Xamax Consultancy Pty Ltd, January 1989, at http://www.rogerclarke.com/SOS/KBTE.html

Clarke R. (1991) 'A Contingency Approach to the Application Software Generations' Database 22, 3 (Summer 1991) 23 - 34, PrePrint at http://www.rogerclarke.com/SOS/SwareGenns.html

Clarke R. (1993) 'Asimov's Laws of Robotics: Implications for Information Technology' in two parts, in IEEE Computer 26,12 (December 1993) 53-61, and 27,1 (January 1994) 57-66, at http://www.rogerclarke.com/SOS/Asimov.html

Clarke R. (2005) 'Human-Artefact Hybridisation: Forms and Consequences' Proc. Ars Electronica 2005 Symposium on Hybrid - Living in Paradox, Linz, Austria, 2-3 September 2005, PrePrint at http://www.rogerclarke.com/SOS/HAH0505.html

Clarke R. (2014) 'What Drones Inherit from Their Ancestors' Computer Law & Security Review 30, 3 (June 2014) 247-262, PrePrint at http://www.rogerclarke.com/SOS/Drones-I.html

Clarke R. (2018a) 'Guidelines for the Responsible Application of Data Analytics' Computer Law & Security Review 34, 3 (May-Jun 2018) 467- 476, PrePrint at http://www.rogerclarke.com/EC/GDA.html

Clarke R. (2018b) 'Principles for Responsible AI: Supporting Materials' Xamax Consultancy Pty Ltd, October 2018, at http://www.rogerclarke.com/EC/PRAI-SM.html

CLA-EP (2016) 'Recommendations on Civil Law Rules on Robotics' Committee on Legal Affairs of the European Parliament, 31 May 2016, at http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN

FLI (2017) 'Asilomar AI Principles' Future of Life Institute, January 2017, at https://futureoflife.org/ai-principles/?cn-reloaded=1

GEFA (2016) 'Position on Robotics and AI' The Greens / European Free Alliance Digital Working Group, November 2016, at https://juliareda.eu/wp-content/uploads/2017/02/Green-Digital-Working-Group-Position-on-Robotics-and-Artificial-Intelligence-2016-11-22.pdf

Giarratano J.C. & Riley G. (1998) 'Expert Systems' 3rd Ed., PWS Publishing Co. Boston, 1998

IEEE (2017) 'Ethically Aligned Design', Version 2. IEEE, December 2017. at http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html

ITIC (2017) 'AI Policy Principles' Information Technology Industry Council, undated but apparently of October 2017, at https://www.itic.org/resources/AI-Policy-Principles-FullReport2.pdf

McCarthy J. (2007) 'What is artificial intelligence?' Department of Computer Science, Stanford University, November 2007, at http://www-formal.stanford.edu/jmc/whatisai/node1.html

Mann S. & Niedzviecki H. (2001) 'Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer' Random House, 2001

Markoff J. (2013) 'In 1949, He Imagined an Age of Robots' The New York Times, 20 May 2013, at http://www.nytimes.com/2013/05/21/science/mit-scholars-1949-essay-on-machine-age-is-found.html

Menzel P. & D'Aluisio F. (2001) 'Robo sapiens' MIT Press, 2001

Russell S.J. & Norvig P. (2009) 'Artificial Intelligence: A Modern Approach' Prentice Hall, 3rd edition, 2009

Smith R. (2018) '5 core principles to keep AI ethical' World Economic Forum, 19 Apr 2018, at https://www.weforum.org/agenda/2018/04/keep-calm-and-make-ai-ethical/

Warwick K. (2014) 'The Cyborg Revolution' Nanoethics 8, 3 (Oct 2014) 263-273


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University. He has spent many years as a Board member and Chair of the Australian Privacy Foundation, and is Company Secretary of the Internet Society of Australia.


