Substantive Version of 12 November 2019
© Xamax Consultancy Pty Ltd, 2019
Available under an AEShareNet licence or a Creative Commons licence.
This document is at http://www.rogerclarke.com/EC/AI-Aust19.html
During the period 2015-2020, Artificial Intelligence (AI) has been promoted with particular vigour. The nature of the technologies, combined with the sometimes quite wild enthusiasm of the technologies' proponents, has given rise to considerable public concern about AI's negative impacts. In an endeavour to calm those concerns, many organisations have published lists of 'principles for responsible AI'.
In Clarke (2019d), a large set of such publications was analysed, resulting in a consolidated set of 10 Themes and 50 Principles. When compared against that consolidation, almost all documents to date are shown to have very limited scope.
As at mid-2019, the sole contender that scored respectably was EC (2019), and even that document only scored 37/50 (74%).
In April 2019, the Australian Department of Industry, Innovation & Science (DI) published a discussion document entitled 'Artificial Intelligence: Australia's Ethics Framework' (Dawson et al. 2019). My submission, at Clarke (2019b), was one of a number of submissions that were highly critical of many aspects of the document, including the 'core principles' for AI that it contained.
In November 2019, it became apparent from a media report (Hendry 2019) that the Department had published a revised document. It carries the date 2 September 2019, but those who had made submissions on the matter were not aware of it until the Hendry article was published on 7 November.
The revised document is much shorter than the original report, having shed the discussion and the poor-quality commentaries on ethics and on regulation (DI 2019). It contains a set of 'AI Ethics Principles'. The principles in the April discussion document (pp. 6, 57) ran to 235 words. Their replacement comprises a significantly reworked, 210-word set of 'Principles at a glance', followed by 1320 words of 'Principles in detail', reproduced in Appendix 1 below.
DI's document says that the 'AI Ethics Principles' "can inform the design, development and implementation of artificial intelligence systems". The audience addressed is declared merely as "organisations" - although presumably both in business and government. However, "The principles are voluntary. They are aspirational and intended to complement - not substitute - existing AI related regulations".
This document assesses whether DI's 'AI Ethics Principles' measure up to the need.
This document reports on an assessment of the DI's document against the consolidated set of 50 Principles. The DI's principles were examined, and elements were noted whose expression has a similar meaning to Principles in the set of 50.
The information arising from the examination is as follows:
The raw result for the DI's Principles is very low: only 13 of the 50 Principles are adequately addressed (26%). A further 18 are only partly or weakly addressed, and 19 are not addressed at all. The overall assessment is therefore a 'Bad Fail', at c. 40%.
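For readers who want to see how the 'c. 40%' figure can arise from the counts above, the following minimal sketch shows the arithmetic, assuming an illustrative weighting scheme in which an adequately addressed Principle scores 1 and a partly or weakly addressed Principle scores somewhere between 0.25 and 0.5. The function and the weights are assumptions made for illustration only, not the scoring scheme actually applied in the evaluations cited here.

    # Illustrative sketch only: the weights for partly/weakly addressed
    # Principles are assumptions, not the published scoring scheme.

    def score_percent(full, partial, absent, partial_weight, total=50):
        """Percentage score across `total` Principles."""
        assert full + partial + absent == total
        return 100.0 * (full + partial_weight * partial) / total

    # DI (2019): 13 adequately addressed, 18 partly/weakly addressed, 19 missed.
    for w in (0.25, 0.5):
        print(f"partial weight {w}: {score_percent(13, 18, 19, w):.0f}%")
    # Prints 35% and 44%, bracketing the 'c. 40%' overall assessment above.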
This is about the same level as the highly inadequate OECD Guidelines (OECD 2019), which were evaluated in Clarke (2019c). DI's document mentions the OECD, but fails to cite the sole authoritative document of real substance published to date, which appeared in the same month as DI's discussion paper: EC (2019). Using the same scoring scheme, the European document achieved 74% (Clarke 2019a).
The document's intent is expressly stimulatory, aimed at "Building Australia's artificial intelligence capability" and "ensuring public trust in AI". But DI has failed to appreciate that the key to achieving trust is to ensure the trustworthiness of the technologies and of organisations' uses of them. That requires a comprehensive set of principles of real substance; articulation of them for each stage of the supply chain; educational processes; means of encouraging their application and discouraging behaviour in breach of the principles; a credible regulatory framework; and the enforcement of at least baseline standards.
DI's document is seriously harmful to the Australian public, because it enables dangerous forms and applications of AI to claim the protection of government-published guidelines.
DI's document is also seriously harmful to Australian industry, in two ways:
Annotated extract from DI (2019)
Segments of text that correspond to elements of the 50 Principles are [[in italics and enclosed in double square brackets]], followed by cross-references to the relevant Principle (in brackets)
You can use our 8 principles when designing, developing, integrating or using artificial intelligence (AI) systems to:
The principles are voluntary. They are aspirational and intended to complement-not substitute-existing AI related regulations. Read how and when you can apply them.
Throughout their lifecycle, AI systems should benefit individuals, society and the environment.
This principle aims to clearly indicate from the outset that [[AI systems should be used for beneficial outcomes for individuals, society and the environment]] (4.3). [[AI system objectives should be clearly identified and justified]] (1.2). AI systems that help address areas of global concern should be encouraged, like the United Nations' Sustainable Development Goals. Ideally, AI systems should be used to benefit all human beings, including future generations.
AI systems designed for legitimate internal business purposes, like increasing efficiency, can have broader impacts on individual, social and environmental wellbeing. Those [[impacts, both positive and negative, should be accounted for throughout the AI systems lifecycle, including impacts outside the organisation]] (1.4, but weak and partial).
Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals.
This principle aims to ensure that AI systems are aligned with human values. [[Machines should serve humans, and not the other way around]] (2.1). AI systems should enable an equitable and democratic society by [[respecting, protecting and promoting human rights, enabling diversity, respecting human freedom]] (5.2) and [[the autonomy of individuals]] (3.4), and protecting the environment.
Human rights risks need to be carefully considered, as AI systems can equally enable and hamper such fundamental rights. [[It's permissible to interfere with certain human rights where it's reasonable, necessary and proportionate]] (cf. 5.6, but facilitative, not protective).
[[All people interacting with AI systems should be able to keep full and effective control over themselves]] (3.4). [[AI systems]] should not undermine the democratic process, and [[should not undertake actions that threaten individual autonomy]] (3.4), like [[deception]] (3.6), [[unfair manipulation]] (4.6), [[unjustified surveillance]] (5.4, but partial), and [[failing to maintain alignment between a disclosed purpose and true action]] (7.1, but partial).
[[AI systems should be designed to augment, complement and empower human cognitive, social and cultural skills]] (2.1). Organisations designing, developing, deploying or operating AI systems should ideally [[hire staff from diverse backgrounds, cultures and disciplines]] to ensure a wide range of perspectives, and [[to minimise the risk of missing important considerations only noticeable by some stakeholders]] (1.7, but partial).
Throughout their lifecycle, [[AI systems]] should be inclusive and accessible, and [[should not involve or result in unfair discrimination against individuals, communities or groups]] (5.1).
This principle aims to ensure that AI systems are fair and that they enable inclusion throughout their entire lifecycle. AI systems should be user-centric and designed in a way that allows all people interacting with it to access the related products or services. This includes both [[appropriate consultation with stakeholders, who may be affected by the AI system throughout its lifecycle]] (1.6, but partial), and [[ensuring people receive equitable access and treatment]] (5.1).
This is particularly important given concerns about the potential for AI to perpetuate societal injustices and have a disparate impact on vulnerable and underrepresented groups including, but not limited to, groups relating to age, disability, race, sex, intersex status, gender identity and sexual orientation. Measures should be taken to [[ensure the AI produced decisions are compliant with anti-discrimination laws]] (5.1).
Throughout their lifecycle, [[AI systems should respect and uphold privacy rights and data protection]] (3.3), and [[ensure the security of data]] (7.4).
This principle aims to ensure respect for privacy and data protection when using AI systems. This includes ensuring proper data governance, and management, for all data used and generated by the AI system throughout its lifecycle. For example, maintaining privacy through appropriate data anonymisation where used by AI systems. Further, [[the connection between data, and inferences drawn from that data by AI systems, should be sound]] (7.7) and [[assessed in an ongoing manner]] (7.9).
This principle also aims to [[ensure appropriate]] [[data]] and [[AI system security measures are in place]] (4.4, but partial, 8.1, but partial, 8.2). This includes the identification of potential security vulnerabilities, and [[assurance of resilience to [cf. robustness against] adversarial attacks]] (8.1, but partial). [[Security measures]] should account for unintended applications of AI systems, and potential abuse risks, with appropriate [[mitigation measures]] (8.1, but partial).
Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose.
This principle aims to ensure that AI systems reliably operate in accordance with their intended purpose throughout their lifecycle. This includes ensuring AI systems are reliable, accurate and reproducible as appropriate.
[[AI systems should not pose unreasonable safety risks]] (4.1, but partial), or [[adopt safety measures that are proportionate to the magnitude of potential risks]] (4.4, but partial, 8.1, but partial). [[AI systems should be monitored and tested to ensure they continue to meet their intended purpose, and any identified problems should be addressed]] (7.1) with ongoing risk management as appropriate. [[Responsibility should be clearly and appropriately identified, for ensuring that an AI system is robust and safe]] (9.1, but partial).
There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.
[It is unclear what the term, 'responsible disclosure' means, cf. 'disclosure', and how that differs from, adds to, or clarifies 'transparency'.]
This principle aims to ensure responsible disclosure when an AI system is significantly impacting on a person's life. The definition of the threshold for 'significant impact' will depend on the context, impact and application of the AI system in question.
Achieving transparency in AI systems through responsible disclosure is important to each stakeholder group for the following reasons (Content based on the IEEE's Ethically Aligned Design Report)
[But the grammar is inconsistent, the meaning is unclear, and no obligations are defined]
Responsible [[disclosures should]] be provided in a timely manner, and [[provide reasonable justifications for AI systems outcomes]] (1.8, 6.3, but only partial). This [[includes information that helps people understand outcomes, like key factors used in decision making]] (6.3).
This principle also aims to [[ensure people have the ability to find out when an AI system is engaging with them]] (6.1) (regardless of the level of impact), and be [are?] able to obtain a reasonable disclosure regarding the AI system.
When an AI system significantly impacts a person, community, group or environment, [[there should be a timely process to allow people to challenge the use or output of the AI system]] (9.2, but very partial).
This principle aims to ensure the provision of efficient, accessible mechanisms that allow people to challenge the use or output of an AI system, when that AI system significantly impacts a person, community, group or environment. The definition of the threshold for 'significant impact' will depend on the context, impact and application of the AI system in question.
Knowing that redress for harm is possible, when things go wrong, is key to ensuring public trust in AI. [But nothing in the text requires that redress be available.] Particular attention should be paid to vulnerable persons or groups.
[[There should be sufficient access to the information available to the algorithm, and inferences drawn, to make contestability effective]] (6.3, but partial). [[In the case of decisions significantly affecting rights, there should be an effective system of oversight, which makes appropriate use of human judgment]] (3.5, but weak).
Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.
This principle aims to acknowledge the relevant organisations and individuals' responsibility for the outcomes of the AI systems that they design, develop, deploy and operate. The application of legal principles regarding accountability for AI systems is still developing.
Mechanisms should be put in place to [[ensure responsibility and accountability for AI systems and their outcomes. This includes both before and after their design, development, deployment and operation]] (1.4, but partial, 9.1, but partial). [[The organisation and individual accountable for the decision should be identifiable as necessary]] (9.1, but partial). [[They must consider the appropriate level of human control or oversight for the particular AI system or use case]] (3.1, 3.2, but weak).
[[AI systems that have a significant impact on an individuals' rights should be accountable to external review, this includes providing timely, accurate, and complete information for the purposes of independent oversight bodies]] (10.2, but partial).
Annotated extract from Clarke (2019)
A PDF version of this Appendix is also available.
Principles that are evident in the DI document (even if only partly covered, or weakly expressed) are in bold-face type, with the relevant segments of text from the DI document shown in italics, followed by cross-references to the locations in which that text occurs
The following Principles apply to each entity responsible for each phase of AI research, invention, innovation, dissemination and application.
1.1 Conceive and design only after ensuring adequate understanding of purposes and contexts
1.2 Justify objectives
AI system objectives should be clearly identified and justified (1)
1.3 Demonstrate the achievability of postulated benefits
1.4 Conduct impact assessment, including risk assessment from all stakeholders' perspectives
impacts, both positive and negative, should be accounted for throughout the AI systems lifecycle, including impacts outside the organisation (1) - but only weak and partial
ensure responsibility and accountability for AI systems and their outcomes. This includes ... before ... their design, development, deployment and operation (8, but partial, in that it fails to require that a process be undertaken)
1.5 Publish sufficient information to stakeholders to enable them to conduct their own assessments
1.6 Conduct consultation with stakeholders and enable their participation in design
appropriate consultation with stakeholders, who may be affected by the AI system throughout its lifecycle (3, but partial, because of "appropriate" and no requirement to reflect their views)
1.7 Reflect stakeholders' justified concerns in the design
hire staff from diverse backgrounds, cultures and disciplines ... to minimise the risk of missing important considerations only noticeable by some stakeholders (2, but partial)
1.8 Justify negative impacts on individuals ('proportionality')
disclosures should ... provide reasonable justifications for AI systems outcomes (6, but only partial, in that it appears only as a matter of transparency, not design)
1.9 Consider alternative, less harmful ways of achieving the same objectives
2.1 Design as an aid, for augmentation, collaboration and inter-operability
Machines should serve humans, and not the other way around (2)
AI systems should be designed to augment, complement and empower human cognitive, social and cultural skills (2)
2.2 Avoid design for replacement of people by independent artefacts or systems, except in circumstances in which those artefacts or systems are demonstrably more capable than people, and even then ensuring that the result is complementary to human capabilities
3.1 Ensure human control over AI-based technology, artefacts and systems
[The organisation and individual accountable for the decision] must consider the appropriate level of human control or oversight for the particular AI system or use case (8, but weak, because of the qualifier "appropriate", and the ambiguities arising from the confused language)
3.2 In particular, ensure human control over autonomous behaviour of AI-based technology, artefacts and systems
[The organisation and individual accountable for the decision] must consider the appropriate level of human control or oversight for the particular AI system or use case (8, but weak, because of the qualifier "appropriate", the ambiguities arising from the confused language, and the failure to recognise the significance of autonomous AI-based systems)
3.3 Respect people's expectations in relation to personal data protections, including:
* their awareness of data-usage
* their consent
* data minimisation
* public visibility and design consultation and participation
* the relationship between data-usage and the data's original purpose
AI systems should ... respect and uphold privacy rights and data protection (4, but apparently limited to Australia's desperately weak data protection laws, and not extending to public expectations)
3.4 Respect each person's autonomy, freedom of choice and right to self-determination
respecting, protecting and promoting ... the autonomy of individuals (2)
All people interacting with AI systems should be able to keep full and effective control over themselves (2)
AI systems ... should not undertake actions that threaten individual autonomy (2)
3.5 Ensure human review of inferences and decisions prior to action being taken
In the case of decisions significantly affecting rights, there should be an effective system of oversight, which makes appropriate use of human judgment (7, but weak, due to the qualifiers "significantly" and "appropriate", and the limitation to abstract, supervisory "oversight" rather than direct operational involvement prior to action being taken)
3.6 Avoid deception of humans
AI systems ... should not undertake ... deception (2)
3.7 Avoid services being conditional on the acceptance of AI-based artefacts and systems
4.1 Ensure people's physical health and safety ('nonmaleficence')
AI systems should not pose unreasonable safety risks (5, but partial)
4.2 Ensure people's psychological safety, by avoiding negative effects on their mental health, emotional state, inclusion in society, worth, and standing in comparison with other people
4.3 Contribute to people's wellbeing ('beneficence')
AI systems should be used for beneficial outcomes for individuals, society and the environment (1)
4.4 Implement safeguards to avoid, prevent and mitigate negative impacts and implications
ensure appropriate ... AI system security measures are in place [including] assurance of [robustness against] adversarial attacks (4, but partial in that only active attacks are mentioned)
AI systems should ... adopt safety measures that are proportionate to the magnitude of potential risks (5)
4.5 Avoid violation of trust
4.6 Avoid the manipulation of vulnerable people, e.g. by taking advantage of individuals' tendencies to addictions such as gambling, and to letting pleasure overrule rationality
AI systems ... should not undertake ... unfair manipulation (2)
5.1 Be just / fair / impartial, treat individuals equally, and avoid unfair discrimination and bias, not only where they are illegal, but also where they are materially inconsistent with public expectations
AI systems ... should not involve or result in unfair discrimination against individuals, communities or groups (3)
ensuring people receive equitable access and treatment (3)
ensure the AI produced decisions are compliant with anti-discrimination laws (3)
5.2 Ensure compliance with human rights laws
respecting, protecting and promoting human rights, enabling diversity, respecting human freedom (2)
5.3 Avoid restrictions on, and promote, people's freedom of movement
5.4 Avoid interference with, and promote privacy, family, home or reputation
AI systems ... should not undertake ... unjustified surveillance (2, but only partial)
5.5 Avoid interference with, and promote, the rights of freedom of information, opinion and expression, of freedom of assembly, of freedom of association, of freedom to participate in public affairs, and of freedom to access public services
5.6 Where interference with human values or human rights is outweighed by other factors, ensure that the interference is no greater than is justified ('harm minimisation')
It's permissible to interfere with certain human rights where it's reasonable, necessary and proportionate (2 - facilitative, not protective)
6.1 Ensure that the fact that a process is AI-based is transparent to all stakeholders
ensure people have the ability to find out when an AI system is engaging with them (6, although qualified as being an "aim" rather than a requirement)
6.2 Ensure that data provenance, and the means whereby inferences are drawn from it, decisions are made, and actions are taken, are logged and can be reconstructed
6.3 Ensure that people are aware of inferences, decisions and actions that affect them, and have access to humanly-understandable explanations of how they came about
disclosures should ... provide reasonable justifications for AI systems outcomes (6, but only partial, in that it relates only to justification, not to the rationale)
disclosures should ... [include] information that helps people understand outcomes, like key factors used in decision making (6, in that it relates only to input data, not to the rationale)
There should be sufficient access to the information available to the algorithm, and inferences drawn, to make contestability effective (7, but partial, in that it relates only to data in, and inference out, but not to the rationale)
7.1 Ensure effective, efficient and adaptive performance of intended functions
AI systems ... should not undertake actions like ... failing to maintain alignment between a disclosed purpose and true action (2, but far from comprehensive)
AI systems should be monitored and tested to ensure they continue to meet their intended purpose, and any identified problems should be addressed (5)
7.2 Ensure data quality and data relevance
7.3 Justify the use of data, commensurate with each data-item's sensitivity
7.4 Ensure security safeguards against inappropriate data access, modification and deletion, commensurate with its sensitivity
AI systems should ... ensure the security of data (4)
7.5 Deal fairly with people ('faithfulness', 'fidelity')
7.6 Ensure that inferences are not drawn from data using invalid or unvalidated techniques
7.7 Test result validity, and address the problems that are detected
the connection between data, and inferences drawn from that data by AI systems, should be sound (4, although the comprehensiveness is in doubt)
7.8 Impose controls in order to ensure that the safeguards are in place and effective
7.9 Conduct audits of safeguards and controls
the connection between data, and inferences drawn from that data by AI systems, should be ... assessed in an ongoing manner (4, but very partial)
8.1 Deliver and sustain appropriate security safeguards against the risk of compromise of intended functions arising from both passive threats and active attacks, commensurate with the significance of the benefits and the potential to cause harm
ensure appropriate ... AI system security measures are in place [including] assurance of [robustness against] adversarial attacks (4, but partial in that only active attacks are mentioned)
Security measures ... mitigation measures (4, but partial, in that the statement encompasses only some of the risks)
AI systems should ... adopt safety measures that are proportionate to the magnitude of potential risks (5, but partial, in that it relates to safety only)
8.2 Deliver and sustain appropriate security safeguards against the risk of inappropriate data access, modification and deletion, arising from both passive threats and active attacks, commensurate with the data's sensitivity
ensure appropriate data ... security measures are in place (4)
8.3 Conduct audits of the justification, the proportionality, the transparency, and the harm avoidance, prevention and mitigation measures and controls
8.4 Ensure resilience, in the sense of prompt and effective recovery from incidents
9.1 Ensure that the responsible entity is apparent or can be readily discovered by any party
Responsibility should be clearly and appropriately identified, for ensuring that an AI system is robust and safe (5, but partial in that it relates only to robustness and safety)
ensure responsibility and accountability for AI systems and their outcomes. This includes ... after their design, development, deployment and operation (8, but partial, in that discoverability of the responsible entity has to be inferred rather than being a requirement)
The organisation and individual accountable for the decision should be identifiable as necessary (8, but partial, in that it appears to relate only to individual decisions, not AI systems)
9.2 Ensure that effective remedies exist, in the form of complaints processes, appeals processes, and redress where harmful errors have occurred
there should be a timely process to allow people to challenge the use or output of the AI system (7, but very partial, in that it encompasses only 'challenge'/'complaint')
10.1 Ensure that complaints, appeals and redress processes operate effectively
10.2 Comply with external complaints, appeals and redress processes and outcomes, including, in particular, provision of timely, accurate and complete information relevant to cases
AI systems that have a significant impact on an individuals' rights should be accountable to external review, this includes providing timely, accurate, and complete information for the purposes of independent oversight bodies (8, but partial, in that it encompasses only "review", without mention of appeals and redress, nor in any direct way with outcomes)
Extracts of relevant DI text are shown below in normal style. Qualifiers are underlined. Comments are [in square brackets].
The principles are voluntary. They are aspirational and intended to complement-not substitute-existing AI related regulations
[The majority of the exhortations are expressed as 'should' or 'should not', even where the action is mandated or prohibited by law.]
AI systems that help address areas of global concern should be encouraged
Ideally, AI systems should be used to benefit all human beings, including future generations.
It's permissible to interfere with certain human rights where it's reasonable, necessary and proportionate. [This is facilitative, not protective]
Organisations designing, developing, deploying or operating AI systems should ideally hire staff from diverse backgrounds, cultures and disciplines ...
This includes ... appropriate consultation with stakeholders ...
...
AI systems should not pose unreasonable safety risks [with no indication of what and/or whose safety is prioritised, and no indication of the processes and the standards to be applied]
There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system [with no indication of the meaning of the key terms, e.g. cf. 'disclosure', and how 'responsible disclosure' differs from, adds to or clarifies 'transparency' and no useful guidance in relation to assessment of significance]
Responsible disclosures should ... provide reasonable justifications for AI systems outcomes [without any indication of how to assess reasonableness. Or perhaps the word intended was 'reasoned'?]
This principle also aims to ensure people have the ability to find out when an AI system is engaging with them ... and [are] able to obtain a reasonable disclosure regarding the AI system [expressed as an aim, not a requirement, and no guidance is provided in relation to 'reasonable'. Or perhaps the word intended was 'responsible'?]
When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system [without any meaningful guidance on how to gauge 'significance']
Knowing that redress for harm is possible, when things go wrong, is key to ensuring public trust in AI. [But nothing in the text requires redress to be available.]
There should be sufficient access to the information available to [about?] the algorithm, and inferences drawn, to make contestability effective [without guidance concerning the meaning of the key word]
In the case of decisions significantly affecting rights, there should be an effective system of oversight, which makes appropriate use of human judgment [without guidance concerning the meanings of the key words]
The organisation and individual accountable for the decision should be identifiable as necessary [without guidance concerning the meaning of the key term]
AI systems that have a significant impact on an individuals' rights should be accountable to external review [without guidance concerning the meaning of the key word]
Of the 50 Principles, the DI document adequately addresses 13, entirely misses 19, and addresses a further 18 only partly or weakly. The 19 Principles that are not addressed at all are the following:
1.1 Conceive and design only after ensuring adequate understanding of purposes and contexts
1.3 Demonstrate the achievability of postulated benefits
1.5 Publish sufficient information to stakeholders to enable them to conduct their own assessments
1.9 Consider alternative, less harmful ways of achieving the same objectives
2.2 Avoid design for replacement of people by independent artefacts or systems, except in circumstances in which those artefacts or systems are demonstrably more capable than people, and even then ensuring that the result is complementary to human capabilities
3.7 Avoid services being conditional on the acceptance of AI-based artefacts and systems
4.2 Ensure people's psychological safety, by avoiding negative effects on their mental health, emotional state, inclusion in society, worth, and standing in comparison with other people
4.5 Avoid violation of trust
5.3 Avoid restrictions on, and promote, people's freedom of movement
5.5 Avoid interference with, and promote, the rights of freedom of information, opinion and expression, of freedom of assembly, of freedom of association, of freedom to participate in public affairs, and of freedom to access public services
6.2 Ensure that data provenance, and the means whereby inferences are drawn from it, decisions are made, and actions are taken, are logged and can be reconstructed
7.2 Ensure data quality and data relevance
7.3 Justify the use of data, commensurate with each data-item's sensitivity
7.5 Deal fairly with people ('faithfulness', 'fidelity')
7.6 Ensure that inferences are not drawn from data using invalid or unvalidated techniques
7.8 Impose controls in order to ensure that the safeguards are in place and effective
8.3 Conduct audits of the justification, the proportionality, the transparency, and the harm avoidance, prevention and mitigation measures and controls
8.4 Ensure resilience, in the sense of prompt and effective recovery from incidents
10.1 Ensure that complaints, appeals and redress processes operate effectively
The 18 Principles that are addressed only partly or weakly are the following:
1.4 Conduct impact assessment, including risk assessment from all stakeholders' perspectives
1.6 Conduct consultation with stakeholders and enable their participation in design
1.7 Reflect stakeholders' justified concerns in the design
1.8 Justify negative impacts on individuals ('proportionality')
3.1 Ensure human control over AI-based technology, artefacts and systems
3.2 In particular, ensure human control over autonomous behaviour of AI-based technology, artefacts and systems
3.3 Respect people's expectations in relation to personal data protection ...
3.4 Respect each person's autonomy, freedom of choice and right to self-determination
3.5 Ensure human review of inferences and decisions prior to action being taken
4.1 Ensure people's physical health and safety ('nonmaleficence')
5.4 Avoid interference with, and promote privacy, family, home or reputation
5.6 Where interference with human values or human rights is outweighed by other factors, ensure that the interference is no greater than is justified ('harm minimisation')
6.3 Ensure that people are aware of inferences, decisions and actions that affect them, and have access to humanly-understandable explanations of how they came about
7.9 Conduct audits of safeguards and controls
8.1 Deliver and sustain appropriate security safeguards against the risk of compromise of intended functions arising from both passive threats and active attacks, commensurate with the significance of the benefits and the potential to cause harm
9.1 Ensure that the responsible entity is apparent or can be readily discovered by any party
9.2 Ensure that effective remedies exist, in the form of complaints processes, appeals processes, and redress where harmful errors have occurred
10.2 Comply with external complaints, appeals and redress processes and outcomes, including, in particular, provision of timely, accurate and complete information relevant to cases
Clarke R. (2019a) 'Principles for AI: A SourceBook' Xamax Consultancy Pty Ltd, April 2019, at http://www.rogerclarke.com/EC/GAIP.html
Clarke R. (2019b) 'Submission to Department of Industry, Innovation & Science re 'Artificial Intelligence: Australia's Ethics Framework' Xamax Consultancy Pty Ltd, May 2019, at http://www.rogerclarke.com/EC/AAISub-1905.html
Clarke R. (2019c) 'The OECD's AI Guidelines of 22 May 2019: Evaluation against a Consolidated Set of 50 Principles' Xamax Consultancy Pty Ltd, May 2019, at http://www.rogerclarke.com/EC/AI-OECD-Eval.html
Clarke R. (2019d) 'Principles and Business Processes for Responsible AI' Computer Law & Security Review 35, 4 (2019) 410-422, PrePrint at http://www.rogerclarke.com/EC/AIP.html#App1
Dawson D. and Schleiger E., Horton J., McLaughlin J., Robinson C., Quezada G., Scowcroft J., and Hajkowicz S. (2019) 'Artificial Intelligence: Australia's Ethics Framework' Data61 / CSIRO, April 2019, at https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/supporting_documents/ArtificialIntelligenceethicsframeworkdiscussionpaper.pdf
DI (2019) 'AI Ethics Principles' Department of Industry, Innovation & Science, 2 September 2019, at https://www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework/ai-ethics-principles
EC (2019) 'Ethics Guidelines for Trustworthy AI' High-Level Expert Group on Artificial Intelligence, European Commission, April 2019, at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=58477
Hendry J. (2019) 'CBA, NAB and Telstra to test govt's new AI ethics doctrine' itNews, 7 November 2019, at https://www.itnews.com.au/news/cba-nab-and-telstra-to-test-govts-new-ai-ethics-doctrine-533580
OECD (2019) 'Recommendation of the Council on Artificial Intelligence' Organisation for Economic Co-operation and Development, 22 May 2019, at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.