
The OECD's AI Guidelines of 22 May 2019:
Evaluation against a Consolidated Set of 50 Principles

Substantive Version of 26 May 2019

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2019

Available under an AEShareNet Free
for Education licence or a Creative Commons 'Some
Rights Reserved' licence.

This document is at http://www.rogerclarke.com/EC/AI-OECD-Eval.html


Background

During the period 2015-2020, Artificial Intelligence (AI) has been promoted with particular vigour. The nature of the technologies, combined with the sometimes quite wild enthusiasm of the technologies' proponents, has given rise to considerable public concern about AI's negative impacts. In an endeavour to calm those concerns, many organisations have published lists of 'principles for responsible AI'.

In Clarke (2019), a large set of such publications was analysed, resulting in a consolidated set of 10 Themes and 50 Principles. When compared against that consolidation, almost all documents to date are shown to have very limited scope.

As at mid-2019, the sole contender that scored respectably was EC (2019), and even that document only scored 37/50 (74%). In late May 2019, the OECD published its 'Recommendation of the Council on Artificial Intelligence' (OECD 2019). Does it measure up to the need?

The Evaluation Process

This document reports on an assessment of the OECD's document against the consolidated set of 50 Principles. The OECD's principles were examined, and elements whose expression has a similar meaning to Principles in the consolidated set were noted.

The information arising from the examination is presented in Appendices 1 to 4 below.

The Results

The raw result for the OECD's Principles was very low - only 20/50 (40%).
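The scoring arithmetic is simple: one point for each of the 50 Principles that a document expresses, even partly, converted to a percentage. A minimal sketch (the function name is illustrative; the figures are those reported in this evaluation):

```python
def coverage_score(covered: int, total: int = 50) -> float:
    """Percentage of the consolidated Principles that a document covers."""
    return 100.0 * covered / total

# Figures reported in this evaluation:
print(coverage_score(20))  # OECD (2019): 20/50 -> 40.0
print(coverage_score(37))  # EC (2019):   37/50 -> 74.0
```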

This can be interpreted in either a 'glass half-empty' or 'glass half-full' manner.

On the one hand, 40% is highly inadequate, especially given that so much literature on the topic was available at the time that the document was prepared, and that the EC had already published a document that achieves a 74% score. The weakness of the document strengthens the hand of corporations, industry associations and government agencies that want to apply AI without due care, and of countries that want to let them do so.

On the other hand, the OECD has always had a strong orientation towards economic development, with social needs treated more as a constraint than as an objective. In addition, the USA, which is an outlier on the international stage in the area of socially responsible behaviour by corporations, has always been a strong player in OECD negotiations. Taking these factors into account, a more positive view might be that the OECD's Principles include reasonably well-formulated expressions of a moderate proportion of the 50 Principles, and that the USA is a signatory to the document.

Interpretation

The history of the OECD Guidelines in relation to data protection is indicative of the tension between the US approach and that of the rest of the world. Despite enormous developments in technology, the OECD failed to build upon its once-leading Guidelines of 1980. Instead, the Council of Europe's Convention 108 of 1981 (revised in 2001), the EU Directive of 1995 and, most recently, the EU's GDPR, which took effect in May 2018, have reflected public expectations and driven developments worldwide. The OECD's 2013 revision was a failure, and it is not clear that the current revision process will regain the OECD's lost momentum and influence in the field of data protection.

The inadequacy of the OECD AI principles means that, worldwide, only one formal government document (EC 2019) scores reasonably well against the 50 Principles. Because the OECD's document fails to satisfy the need, countries that use it as a reference-point will continue to apply AI irresponsibly, and public opposition to, and negative media reports about, AI will multiply. It would appear that the OECD has again ceded leadership to the EU, in the AI field as in data protection.


Appendix 1: Principles for responsible stewardship of trustworthy AI

Annotated extract from OECD (2019)

Segments of text that correspond to elements of the 50 Principles are enclosed [[in double square brackets]], followed by cross-references to the relevant Principle (in brackets)

1. Inclusive growth, sustainable development and well-being

Stakeholders should proactively engage in responsible stewardship of trustworthy AI [[in pursuit of beneficial outcomes for people and the planet]] (4.3), such as [[augmenting human capabilities]] (2.1) and enhancing creativity, [[advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities]] (5.1), and protecting natural environments, thus [[invigorating]] inclusive growth, sustainable development and [[well-being]] (4.3).

2. Human-centred values and fairness

3. Transparency and explainability

AI Actors should commit to [[transparency and responsible disclosure]] (1.5) regarding AI systems. To this end, they should [[provide meaningful information]], appropriate to the context, and consistent with the state of art:

4. Robustness, security and safety

5. Accountability

AI actors should be [[accountable for the proper functioning of AI systems and for the respect of the above principles]] (9.2), based on their roles, the context, and consistent with the state of art.


Appendix 2: The 50 Principles for Responsible AI

Annotated extract from Clarke (2019)
See here for a PDF version of this Appendix

Principles that are evident in the OECD document (even if only partly covered, or weakly expressed) are in bold-face type, with the relevant segments of text from the OECD document shown in italics, followed by cross-references to the locations in which that text occurs

The following Principles apply to each entity responsible for each phase of AI research, invention, innovation, dissemination and application.

1. Assess Positive and Negative Impacts and Implications

1.1 Conceive and design only after ensuring adequate understanding of purposes and contexts

1.2 Justify objectives

1.3 Demonstrate the achievability of postulated benefits

1.4 Conduct impact assessment, including risk assessment from all stakeholders' perspectives
apply a systematic risk management approach ... on a continuous basis (4(c))

1.5 Publish sufficient information to stakeholders to enable them to conduct their own assessments
commit to transparency and responsible disclosure ... To this end, ... provide meaningful information ... to foster a general understanding of AI systems (3(i))

1.6 Conduct consultation with stakeholders and enable their participation in design

1.7 Reflect stakeholders' justified concerns in the design

1.8 Justify negative impacts on individuals ('proportionality')

1.9 Consider alternative, less harmful ways of achieving the same objectives

2. Complement Humans

2.1 Design as an aid, for augmentation, collaboration and inter-operability
in pursuit of beneficial outcomes for people ... such as augmenting human capabilities (1)

2.2 Avoid design for replacement of people by independent artefacts or systems, except in circumstances in which those artefacts or systems are demonstrably more capable than people, and even then ensuring that the result is complementary to human capabilities

3. Ensure Human Control

3.1 Ensure human control over AI-based technology, artefacts and systems

3.2 In particular, ensure human control over autonomous behaviour of AI-based technology, artefacts and systems

3.3 Respect people's expectations in relation to personal data protections, including:
* their awareness of data-usage
* their consent
* data minimisation
* public visibility and design consultation and participation
* the relationship between data-usage and the data's original purpose
respect the rule of law, human rights and democratic values ... These include ... privacy and data protection (2(a))
apply a systematic risk management approach ... on a continuous basis to address risks ... including privacy, digital security (4(c))

3.4 Respect each person's autonomy, freedom of choice and right to self-determination
respect the rule of law, human rights and democratic values ... These include freedom, dignity and autonomy (2(a))

3.5 Ensure human review of inferences and decisions prior to action being taken
implement mechanisms and safeguards, such as capacity for human determination (2(b))

3.6 Avoid deception of humans

3.7 Avoid services being conditional on the acceptance of AI-based artefacts and systems

4. Ensure Human Safety and Wellbeing

4.1 Ensure people's physical health and safety ('nonmaleficence')
AI systems should be ... safe throughout their entire lifecycle ... (4(a))
apply a systematic risk management approach ... on a continuous basis to address risks ... including ... safety (4(c))

4.2 Ensure people's psychological safety, by avoiding negative effects on their mental health, emotional state, inclusion in society, worth, and standing in comparison with other people

4.3 Contribute to people's wellbeing ('beneficence')
in pursuit of beneficial outcomes for people and the planet ... invigorating ... well-being (1)

4.4 Implement safeguards to avoid, prevent and mitigate negative impacts and implications
implement mechanisms and safeguards (2(b))

4.5 Avoid violation of trust

4.6 Avoid the manipulation of vulnerable people, e.g. by taking advantage of individuals' tendencies to addictions such as gambling, and to letting pleasure overrule rationality

5. Ensure Consistency with Human Values and Human Rights

5.1 Be just / fair / impartial, treat individuals equally, and avoid unfair discrimination and bias, not only where they are illegal, but also where they are materially inconsistent with public expectations
advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities (1)
respect the rule of law, human rights and democratic values ... These include ... non-discrimination and equality, diversity, fairness (2(a))
apply a systematic risk management approach ... on a continuous basis to address risks related to AI systems, including ... bias (4(c))

5.2 Ensure compliance with human rights laws
respect the rule of law, human rights (2(a))

5.3 Avoid restrictions on, and promote, people's freedom of movement

5.4 Avoid interference with, and promote privacy, family, home or reputation
respect the rule of law, human rights and democratic values ... These include ... privacy ... (2(a))
apply a systematic risk management approach ... on a continuous basis to address risks ... including privacy ... (4(c))

5.5 Avoid interference with, and promote, the rights of freedom of information, opinion and expression, of freedom of assembly, of freedom of association, of freedom to participate in public affairs, and of freedom to access public services

5.6 Where interference with human values or human rights is outweighed by other factors, ensure that the interference is no greater than is justified ('harm minimisation')

6. Deliver Transparency and Auditability

6.1 Ensure that the fact that a process is AI-based is transparent to all stakeholders
commit to transparency and responsible disclosure ... To this end, ... provide meaningful information ... to make stakeholders aware of their interactions with AI systems (3(ii))

6.2 Ensure that data provenance, and the means whereby inferences are drawn from it, decisions are made, and actions are taken, are logged and can be reconstructed
commit to transparency and responsible disclosure ... To this end, ... provide meaningful information ... to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision (3(iv))
ensure traceability, including in relation to datasets, processes and decisions ... to enable analysis of the AI system's outcomes and responses to inquiry (4(b))

6.3 Ensure that people are aware of inferences, decisions and actions that affect them, and have access to humanly-understandable explanations of how they came about
commit to transparency and responsible disclosure ... To this end, ... provide meaningful information ... to enable those affected by an AI system to understand the outcome (3(iii))

7. Embed Quality Assurance

7.1 Ensure effective, efficient and adaptive performance of intended functions
AI systems should be robust ... throughout their entire lifecycle ... so that ... they function appropriately (4(a))

7.2 Ensure data quality and data relevance

7.3 Justify the use of data, commensurate with each data-item's sensitivity

7.4 Ensure security safeguards against inappropriate data access, modification and deletion, commensurate with its sensitivity
AI systems should be ... secure ... throughout their entire lifecycle ... so that ... they function appropriately (4(a))
apply a systematic risk management approach ... on a continuous basis to address risks ... including ... digital security (4(c))

7.5 Deal fairly with people ('faithfulness', 'fidelity')

7.6 Ensure that inferences are not drawn from data using invalid or unvalidated techniques

7.7 Test result validity, and address the problems that are detected

7.8 Impose controls in order to ensure that the safeguards are in place and effective

7.9 Conduct audits of safeguards and controls

8. Exhibit Robustness and Resilience

8.1 Deliver and sustain appropriate security safeguards against the risk of compromise of intended functions arising from both passive threats and active attacks, commensurate with the significance of the benefits and the potential to cause harm
AI systems should be robust ... throughout their entire lifecycle ... so that ... they function appropriately (4(a))

8.2 Deliver and sustain appropriate security safeguards against the risk of inappropriate data access, modification and deletion, arising from both passive threats and active attacks, commensurate with the data's sensitivity
AI systems should be ... secure ... throughout their entire lifecycle ... so that ... they function appropriately (4(a))
apply a systematic risk management approach ... on a continuous basis to address risks ... including ... digital security (4(c))

8.3 Conduct audits of the justification, the proportionality, the transparency, and the harm avoidance, prevention and mitigation measures and controls

8.4 Ensure resilience, in the sense of prompt and effective recovery from incidents

9. Ensure Accountability for Obligations

9.1 Ensure that the responsible entity is apparent or can be readily discovered by any party

9.2 Ensure that effective remedies exist, in the form of complaints processes, appeals processes, and redress where harmful errors have occurred
provide meaningful information ... to enable those adversely affected by an AI system to challenge its outcome (3(iv))
ensure traceability ... to enable analysis of the AI system's outcomes and responses to inquiry (4(b))
AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles ... (5)

10. Enforce, and Accept Enforcement of, Liabilities and Sanctions

10.1 Ensure that complaints, appeals and redress processes operate effectively

10.2 Comply with external complaints, appeals and redress processes and outcomes, including, in particular, provision of timely, accurate and complete information relevant to cases


Appendix 3: Weak Expressions Among the OECD Principles

Some of the OECD principles fall short of reasonable expectations. In each case below, the extract from the 50 Principles is followed by the relevant extract from the OECD document in italics, with key segments in bold italics, and a brief explanation

1.5 Publish sufficient information to stakeholders to enable them to conduct their own assessments
commit to
transparency and responsible disclosure ... To this end, ... provide meaningful information ... to foster a general understanding of AI systems ... consistent with the state of art (3(i))

(1) The words 'commit to' fall short of a requirement to deliver on the commitment

(2) Organisations are absolved of the responsibility if the technology is inadequate

3.5 Ensure human review of inferences and decisions prior to action being taken
implement mechanisms and safeguards, such as capacity for human determination ... consistent with the state of art (2(b))

(1) There is no obligation to ensure human review, because the requirement is only for the capacity for review to be performed, not for the actual performance of human review

(2) Its expression only as an example arguably means that it is a suggestion, not a requirement

(3) Organisations are absolved of the responsibility if the technology is inadequate

4.4 Implement safeguards to avoid, prevent and mitigate negative impacts and implications

implement mechanisms and safeguards ... that are ... consistent with the state of art (2(b))

(1) It is far from clear that avoidance, prevention and mitigation are all encompassed by the wording used

(2) Organisations are absolved of the responsibility if the technology is inadequate

7.4 Ensure security safeguards against inappropriate data access, modification and deletion, commensurate with its sensitivity
AI systems should be ... secure ... throughout their entire lifecycle ... so that ... they function appropriately (4(a))
apply a systematic risk management approach ... on a continuous basis to address risks ... including ... digital security (4(c))

It is not clear that the expression encompasses all aspects of data security

9.2 Ensure that effective remedies exist, in the form of complaints processes, appeals processes, and redress where harmful errors have occurred
provide meaningful information ... to enable those adversely affected by an AI system to challenge its outcome (3(iv))
ensure traceability ... to enable analysis of the AI system's outcomes and responses to inquiry (4(b))
AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles ... (5)

It is far from clear that the expressions encompass all aspects of an effective process to remedy errors arising from the application of AI, particularly given the absence of any aspects of the vital, complementary Principles 9.1, 10.1 and 10.2


Appendix 4: Key Instances of Principles Missing From OECD (2019)

Of the 50 Principles, 30 are entirely missing from the OECD document.
This Appendix lists a dozen of the most serious omissions


References

Clarke R. (2019) 'Principles and Business Processes for Responsible AI' Forthcoming, Computer Law & Security Review, PrePrint at http://www.rogerclarke.com/EC/AIP.html#App1

EC (2019) 'Ethics Guidelines for Trustworthy AI' High-Level Expert Group on Artificial Intelligence, European Commission, April 2019, at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=58477

OECD (2019) 'Recommendation of the Council on Artificial Intelligence' Organisation for Economic Co-operation and Development, 22 May 2019, at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.



Created: 25 May 2019 - Last Amended: 26 May 2019 by Roger Clarke - Site Last Verified: 15 February 2009