Revision of 25 September 2018
Prepared in support of Guidelines for the Responsible Business Use of AI
© Xamax Consultancy Pty Ltd, 2018
Available under an AEShareNet licence or a Creative Commons licence.
This document is at http://www.rogerclarke.com/EC/GAIE.html
This page provides citations to and excerpts from 8 expressions of general ethical principles applied to technology-rich contexts - including bio-medicine, surveillance, robotics and information technology.
This document is complemented by a collection of 22 further documents that express principles specifically relevant to Artificial Intelligence (AI). Together, the collection of 30 sets of principles provides a basis for a consolidated super-set of 50 Principles for Responsible AI, published in Clarke (2019). Each of the 8 documents in this set is given a score showing how many of the 50 Principles are at least modestly reflected in the document.
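As a rough illustration of how such a tally works (a sketch only, not Clarke's actual method or mapping; the document labels and principle numbers below are placeholders), a document's score is simply the count of distinct principles, out of the 50, that it at least modestly reflects:

    from typing import Dict, Set

    def coverage_score(reflected: Set[int], total_principles: int = 50) -> int:
        """Count how many of the numbered principles a document at least modestly reflects."""
        # Ignore any numbers outside the 1..total_principles range.
        return len({p for p in reflected if 1 <= p <= total_principles})

    # Hypothetical mappings from document label to the principle numbers it reflects.
    documents: Dict[str, Set[int]] = {
        "Beauchamp & Childress (1979)": {1, 2, 5, 9},
        "Marx (1998)": {1, 2, 3, 7, 11, 14, 22},
        "EGE (2018)": {1, 4, 6, 13, 20, 31, 44},
    }

    for name, reflected in documents.items():
        print(f"{name}: {coverage_score(reflected)} of 50 principles reflected")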
The individual has the right to act as a free agent. That is, human beings are free to decide how they live their lives as long as their decisions do not negatively impact the lives of others. Human beings also have the right to exercise freedom of thought or choice.
Our interactions with people (within the helping professions or otherwise) should not harm others. We should not engage in any activities that run the risk of harming others.
Our actions should actively promote the health and well-being of others.
In the broadest sense of the word, this means being fair. This is especially the case when the rights of one individual or group are balanced against another. Being just, however, assumes three standards. They are impartiality, equality, and reciprocity (based on the golden rule: treat others as you wish to be treated).
Being faithful involves loyalty, truthfulness, promise keeping, and respect. This principle is related to the treatment of autonomous people. Failure to remain faithful in dealing with others denies individuals the full opportunity to exercise free choice in a relationship, therefore limiting their autonomy.
1. Harm: does the technique cause unwarranted physical or psychological harm?
2. Boundary: does the technique cross a personal boundary without permission (whether involving coercion or deception or a body, relational or spatial border)?
3. Trust: does the technique violate assumptions that are made about how personal information will be treated such as no secret recordings?
4. Personal relationships: is the tactic applied in a personal or impersonal setting?
5. Invalidity: does the technique produce invalid results?
6. Awareness: are individuals aware that personal information is being collected, who seeks it and why?
7. Consent: do individuals consent to the data collection?
8. Golden rule: would those responsible for the surveillance (both the decision to apply it and its actual application) agree to be its subjects under the conditions in which they apply it to others?
9. Minimization: does a principle of minimization apply?
10. Public decision-making: was the decision to use a tactic arrived at through some public discussion and decision making process?
11. Human review: is there human review of machine generated results?
12. Right of inspection: are people aware of the findings and how they were created?
13. Right to challenge and express a grievance: are there procedures for challenging the results, or for entering alternative data or interpretations into the record?
14. Redress and sanctions: if the individual has been treated unfairly and procedures violated, are there appropriate means of redress? Are there means for discovering violations and penalties to encourage responsible surveillant behavior?
15. Adequate data stewardship and protection: can the security of the data be adequately protected?
16. Equality-inequality regarding availability and application: a) is the means widely available or restricted to only the most wealthy, powerful or technologically sophisticated? b) within a setting is the tactic broadly applied to all people or only to those less powerful or unable to resist? c) if there are means of resisting the provision of personal information, are these equally available, or restricted to the most privileged?
17. The symbolic meaning of a method: what does the use of a method communicate more generally?
18. The creation of unwanted precedents: is it likely to create precedents that will lead to its application in undesirable ways?
19. Negative effects on surveillors and third parties: are there negative effects on those beyond the subject?
20. Beneficiary: does application of the tactic serve broad community goals, the goals of the object of surveillance or the personal goals of the data collector?
21. Proportionality: is there an appropriate balance between the importance of the goal and the cost of the means?
22. Alternative means: are other less costly means available?
23. Consequences of inaction: where the means are very costly, what are the consequences of taking no surveillance action?
24. Protections: are adequate steps taken to minimize costs and risk?
25. Appropriate vs. inappropriate goals: are the goals of the data collection legitimate?
26. The goodness of fit between the means and the goal: is there a clear link between the information collected and the goal sought?
27. Information used for original vs. other unrelated purposes: is the personal information used for the reasons offered for its collection and for which consent may have been given and does the data stay with the original collector, or does it migrate elsewhere?
28. Failure to share secondary gains from the information: is the personal data collected used for profit without permission from, or benefit to, the person who provided it?
29. Unfair disadvantage: is the information used in such a way as to cause unwarranted harm or disadvantage to its subject?
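Marx's questions amount to a structured checklist that can be applied to a specific surveillance proposal. The sketch below is only an illustration of that idea, not anything prescribed by Marx (1998): each question (three are sampled here, paraphrased) is paired with the answer that would signal a concern, and an assessment simply collects the questions so flagged.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass(frozen=True)
    class Question:
        number: int
        text: str
        concern_answer: bool  # the yes/no answer that signals an ethical concern

    # The pairing of question and concern-signalling answer is an assumption made for this sketch.
    QUESTIONS = [
        Question(1, "Does the technique cause unwarranted harm?", True),
        Question(7, "Do individuals consent to the data collection?", False),
        Question(11, "Is there human review of machine-generated results?", False),
    ]

    def concerns(answers: Dict[int, bool]) -> List[int]:
        """Return the numbers of the questions whose recorded answer signals a concern."""
        return [q.number for q in QUESTIONS
                if q.number in answers and answers[q.number] == q.concern_answer]

    # Hypothetical assessment: no harm, no consent, human review present.
    print("Questions raising concern:", concerns({1: False, 7: False, 11: True}))  # -> [7]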
These were nominally for "regulating robots in the real world", but are qualified so heavily that they fall far short of representing a practical resource.
1. The Principle of Killing
Robots are multi-use tools.
Robots should not be designed solely or primarily to kill or harm humans,
except in the interests of national security
2. The Principle of Compliance
Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws, fundamental rights & freedoms, including privacy
3. The Principle of Commoditisation
Robots are products.
They should be designed using processes which assure their safety and security
4. The Principle of Transparency
Robots are manufactured
artefacts. They should not be designed in a deceptive way to exploit vulnerable
users; instead their machine nature should be transparent
5. The Principle of Legal Responsibility
The person with
legal responsibility for a robot should be attributed. It should be possible
to find out who is responsible for any robot
A framework for the ethical impact assessment of information technology
This is a mock-up of what a South Korean Presidential Charter might have looked like, had a commitment in the Intelligent Robots Development and Distribution Promotion Act 2008 ever been fulfilled.
Part 1: Manufacturing Standards
Part 2: Rights & Responsibilities of Users/Owners
Sec. 1: Rights and Expectations of Owners and Users
8 i) Owners have the right to be able to take control of their robot.
9 ii) Owners and users have the right to use of their robot without risk or fear of physical or psychological harm.
10 iii) Users have the right to security of their personal details and other sensitive information.
11 iv) Owners and users have the right to expect a robot to perform any task for which it has been explicitly designed (subject to Section 2 of this Charter).
Sec. 2: Responsibilities of Owners and Users
This Charter recognizes the user's right to utilize a robot in any way they see fit, so long as this use remains `fair' and `legal' within the parameters of the law. As such:
12 i) A user must not use a robot to commit an illegal act.
13 ii) A user must not use a robot in a way that may be construed as causing physical or psychological harm to an individual.
14 iii) An owner must take `reasonable precaution' to ensure that their robot does not pose a threat to the safety and well-being of individuals or their property.
Sec. 3: The following acts are an offense under Korean Law:
15 i) To deliberately damage or destroy a robot.
16 ii) Through gross negligence, to allow a robot to come to harm.
17 iii) It is a lesser but nonetheless serious offence to treat a robot in a way which may be construed as deliberately and inordinately abusive.
Part 3: Rights & Responsibilities for Robots
Sec. 1: Responsibilities of Robots
18 i) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
19 ii) A robot must obey any orders given to it by human beings, except where such orders would conflict with Part 3 Section 1 subsection "i" of this Charter.
20 iii) A robot must not deceive a human being.
Sec 2: Rights of Robots
Under Korean Law, Robots are afforded the following fundamental rights:
21 i) The right to exist without fear of injury or death.
22 ii) The right to live an existence free from systematic abuse.
PRE-CONDITIONS
1. Evaluation
All proposals that have the potential to harm
privacy must be subjected to prior evaluation against appropriate privacy
principles.
2. Consultation
All evaluation processes must feature
consultation processes with the affected public and their representative and
advocacy organisations.
3. Transparency
Sufficient information must be disclosed in
advance to enable meaningful and consultative evaluation processes to take
place.
4. Justification
All privacy-intrusive aspects must be
demonstrated to be necessary pre-conditions for the achievement of specific
positive outcomes.
5. Proportionality
The benefits arising from all
privacy-intrusive aspects must be demonstrated to be commensurate with their
financial and other costs, and the risks that they give rise to.
DESIGN
6. Mitigation
Where privacy-intrusiveness cannot be
avoided, mitigating measures must be conceived, implemented and sustained, in
order to minimise the harm caused.
7. Controls
All privacy-intrusive aspects must be subject
to controls, to ensure that practices reflect policies and procedures. Breaches
must be subject to sanctions, and the sanctions must be applied.
REVIEW
8. Audit
All privacy-intrusive aspects and their associated
justification, proportionality, transparency, mitigation measures and controls
must be subject to review, periodically and when warranted.
1 (a) Human dignity
The recognition of the inherent
human state of being worthy of respect must not be violated by 'autonomous'
technologies. This means, for instance, that there are limits to
determinations and classifications concerning persons, made on the basis of
algorithms and `autonomous' systems, especially when those affected by them are
not informed about them. It also implies that there have to be (legal) limits
to the ways in which people can be led to believe that they are dealing with
human beings while in fact they are dealing with algorithms and smart
machines.
2 (b) Autonomy
'Autonomous' systems must not impair
freedom of human beings to set their own standards and norms and be able to
live according to them. All 'autonomous' technologies must, hence, honour the
human ability to choose whether, when and how to delegate decisions and actions
to them. This also involves the transparency and predictability of 'autonomous'
systems.
3 (c) Responsibility
`Autonomous' systems should only
be developed and used in ways that serve the global social and environmental
good, as determined by outcomes of deliberative democratic processes. This
implies that they should be designed so that their effects align with a
plurality of fundamental human values and rights. Applications of AI and
robotics should not pose unacceptable risks of harm to human beings, and not
compromise human freedom and autonomy by illegitimately and surreptitiously
reducing options for and knowledge of citizens.
4 (d) Justice, equity, and solidarity
Discriminatory
biases in data sets used to train and run AI systems should be prevented or
detected, reported and neutralised at the earliest stage possible. Equal
access to `autonomous' technologies and fair distribution of benefits require
the formulation of new models of fair distribution and benefit sharing apt to
respond to the economic transformations caused by automation, digitalisation
and AI.
5 (e) Democracy
Key decisions on the regulation of AI
development and application should be the result of democratic debate and
public engagement. Value pluralism, diversity and accommodation of a variety
of conceptions of the good life of citizens must not be jeopardised,
subverted or equalised by new technologies that inhibit or influence political
decision making.
6 (f) Rule of law and accountability
Rule of law,
access to justice and the right to redress and a fair trial provide the
necessary framework for ensuring the observance of human rights standards and
potential AI specific regulations. There is a need to clarify with whom
liabilities lie for damages caused by undesired behaviour of `autonomous'
systems. Moreover, effective harm mitigation systems should be in place.
7 (g) Security, safety, bodily and mental integrity
Safety and security of `autonomous' systems materialises in three
forms: (1) external safety for their environment and users, (2) reliability
and internal robustness, e.g. against hacking, and (3) emotional safety with
respect to human-machine interaction. All dimensions of safety must be taken
into account by AI developers and strictly tested before release. Special
attention should hereby be paid to persons who find themselves in a vulnerable
position. Special attention should also be paid to potential dual use and
weaponisation of AI, e.g. in cybersecurity, finance, infrastructure and armed
conflict.
8 (h) Data protection and privacy
`Autonomous' systems
must not interfere with the right to private life.
Consideration may be given to the ongoing debate about the
introduction of two new rights: the right to meaningful human contact and the
right to not be profiled, measured, analysed, coached or nudged.
9 (i) Sustainability
AI technology must be in line
with the human responsibility to ensure the basic preconditions for life on our
planet, continued prospering for mankind and preservation of a good environment
for future generations.
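Principle 4 (d) above calls for discriminatory biases in training data to be detected and reported at the earliest possible stage. The EGE statement prescribes no particular method; as an illustrative sketch only, one elementary check compares favourable-outcome rates across groups in a labelled data set (the records and threshold below are assumptions):

    from collections import defaultdict
    from typing import Dict, Iterable, Tuple

    def positive_rates(records: Iterable[Tuple[str, int]]) -> Dict[str, float]:
        """Rate of favourable outcomes (label == 1) for each group."""
        totals: Dict[str, int] = defaultdict(int)
        positives: Dict[str, int] = defaultdict(int)
        for group, label in records:
            totals[group] += 1
            positives[group] += label
        return {g: positives[g] / totals[g] for g in totals}

    # Toy labelled records: (group, favourable outcome?); purely illustrative.
    data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    rates = positive_rates(data)
    gap = max(rates.values()) - min(rates.values())
    print(rates, f"demographic parity gap = {gap:.2f}")
    if gap > 0.2:  # illustrative threshold, not drawn from the EGE statement
        print("Outcome rates differ markedly across groups; review the data set for bias.")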
AHRC (2015) 'Rights and freedoms: right by right' Australian Human Rights Commission, April 2015, at http://www.humanrights.gov.au/rights-and-freedoms-right-right-0
Akiko (2012) 'South Korean Robot Ethics Charter 2012' Akiko's Blog, 2012, at https://akikok012um1.wordpress.com/south-korean-robot-ethics-charter-2012/
APF (2013) 'Meta-Principles for Privacy Protection' Australian Privacy Foundation, March 2013, at https://privacy.org.au/policies/meta-principles/
Beauchamp T.L. & Childress J.F. (1979) 'Principles of biomedical ethics' Oxford University Press, 1979
Boden M., Bryson J. et al. (2017) 'Principles of robotics: regulating robots in the real world' (of the UK Engineering and Physical Science Research Council) Connection Science 29, 2 (April 2017) 124-129, at https://www.tandfonline.com/doi/pdf/10.1080/09540091.2016.1271400
EGE (2018) 'Statement on Artificial Intelligence, Robotics and `Autonomous' Systems' European Group on Ethics in Science and New Technologies, European Commission, March 2018, at https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf
ICCPR (1966) 'International Covenant on Civil and Political Rights' United Nations, 1966, at http://treaties.un.org/doc/Publication/UNTS/Volume%20999/volume-999-I-14668-English.pdf
Marx G.T. (1998) 'An Ethics For The New Surveillance' The Information Society 14, 3 (August 1998) 171-185
Wright D. (2011) 'A framework for the ethical impact assessment of information technology' Ethics Inf Technol 13 (2011) 199-226, at https://www.researchgate.net/profile/David_Wright24/publication/225629433_A_framework_for_the_ethical_impact_assessment_of_information_technology/links/5741c49a08ae298602ee247c/A-framework-for-the-ethical-impact-assessment-of-information-technology.pdf
Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.