

Ethical Analysis and Information Technology

Revision of 25 September 2018

Prepared in support of Guidelines for the Responsible Business Use of AI

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2018

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/EC/GAIE.html


Introduction

This page provides citations to and excerpts from 8 expressions of general ethical principles applied to technology-rich contexts - including bio-medicine, surveillance, robotics and information technology.

This document is complemented by a collection of 22 further documents that express principles specifically relevant to Artificial Intelligence (AI). Together, the collection of 30 sets of principles provides a basis for a consolidated super-set of 50 Principles for Responsible AI, published in Clarke (2019). Each of the 8 documents in this set is given a score showing how many of the 50 Principles are at least modestly reflected in the document.


Contents

  1. Civil and Political Rights (ICCPR 1966)
  2. Principles of Biomedical Ethics (Beauchamp & Childress 1979)
  3. The Ethics of Surveillance (Marx 1998)
  4. Principles of Robotics (EPSRC-UK 2011)
  5. Ethical Impact Assessment of IT (Wright 2011)
  6. 'South Korean' Robot Ethics Charter (Akiko 2012)
  7. Meta-Principles for Privacy Protection (APF 2013)
  8. Statement on Artificial Intelligence, Robotics and `Autonomous' Systems (EGE 2018)

References


1. Civil and Political Rights (ICCPR 1966, AHRC 2015)
(classified as a governmental organisation)
7 / 50

  1. Right to self-determination of peoples (ICCPR Art. 1) 
  2. Rights to equality and non-discrimination (ICCPR Art. 2.1, [14], 26) 
  3. Human rights and non-citizens  (ICCPR Articles 2.1, 13) 
  4. Legislative and other measures for implementation (ICCPR Art. 2.2) 
  5. Right to an effective remedy (ICCPR Art. 2.3) 
  6. Permissible limitations on rights
  7. Equal rights of men and women (ICCPR Art. 3) 
  8. Derogation from rights in emergencies (ICCPR Art. 4) 
  9. Non-diminution of rights (ICCPR Art. 5) 
  10. Right to life (ICCPR Art. 6)
  11. Freedom from torture or cruel, degrading or inhuman treatment or punishment (ICCPR Art. 7)
  12. Freedom from slavery and forced labour (ICCPR Art. 8)
  13. Security of the person and freedom from arbitrary detention (ICCPR Art. 9)
  14. Right to humane treatment in detention (ICCPR Art. 10)
  15. Prohibition on imprisonment for inability to fulfil a contract (ICCPR Art. 11)
  16. Right to freedom of movement (ICCPR Art. 12) 
  17. Fair trial and fair hearing rights (ICCPR Art. 14.1) 
  18. Minimum guarantees in criminal proceedings (ICCPR Articles 14.2 - 14.7) 
  19. Prohibition on retrospective criminal laws (ICCPR Art. 15) 
  20. Right to recognition as a person (ICCPR Art. 16) 
  21. Freedom from interference with privacy, family, home or reputation (ICCPR Art. 17) 
  22. Freedom of thought, conscience and religion or belief (ICCPR Art. 18) 
  23. Freedom of information, opinion and expression (ICCPR Art. 19) 
  24. Prohibition of advocacy of national, racial or religious hatred (ICCPR Art. 20) 
  25. Freedom of assembly (ICCPR Art. 21) 
  26. Freedom of association (ICCPR Art. 22) 
  27. Right to respect for the family (ICCPR Art. 23.1) 
  28. Right to marry and found a family (ICCPR Art. 23.2) 
  29. Rights of parents and children (ICCPR Art. 24) 
  30. Right to name and nationality (ICCPR Art. 24)
  31. Right to participation in public affairs, voting rights and access to public service (ICCPR Art. 25) 
  32. Rights of members of ethnic, linguistic and religious minorities (ICCPR Art. 27)

2. Principles of Biomedical Ethics (Beauchamp & Childress 1979)
(classified as academic)
5 / 50

(1) Respecting autonomy

The individual has the right to act as a free agent. That is, human beings are free to decide how they live their lives as long as their decisions do not negatively impact the lives of others. Human beings also have the right to exercise freedom of thought or choice.

(2) Doing no harm (Nonmaleficence)

Our interactions with people (within the helping professions or otherwise) should not harm others. We should not engage in any activities that run the risk of harming others.

(3) Benefiting others (Beneficence)

Our actions should actively promote the health and well-being of others.

(4) Being just (Justice)

In the broadest sense of the word, this means being fair. This is especially the case when the rights of one individual or group are balanced against another. Being just, however, assumes three standards. They are impartiality, equality, and reciprocity (based on the golden rule: treat others as you wish to be treated).

(5) Being faithful (Fidelity)

Being faithful involves loyalty, truthfulness, promise keeping, and respect. This principle is related to the treatment of autonomous people. Failure to remain faithful in dealing with others denies individuals the full opportunity to exercise free choice in a relationship, therefore limiting their autonomy.


3. The Ethics of Surveillance (Marx 1998)
Questions To Help Determine The Ethics of Surveillance
(classified as academic)
23 / 50

A. The Means

1. Harm: does the technique cause unwarranted physical or psychological harm?

2. Boundary: does the technique cross a personal boundary without permission (whether involving coercion or deception or a body, relational or spatial border)?

3. Trust: does the technique violate assumptions that are made about how personal information will be treated such as no secret recordings?

4. Personal relationships: is the tactic applied in a personal or impersonal setting?

5. Invalidity: does the technique produce invalid results?

B. The Data Collection Context

6. Awareness: are individuals aware that personal information is being collected, who seeks it and why?

7. Consent: do individuals consent to the data collection?

8. Golden rule: would those responsible for the surveillance (both the decision to apply it and its actual application) agree to be its subjects under the conditions in which they apply it to others?

9. Minimization: does a principle of minimization apply?

10. Public decision-making: was the decision to use a tactic arrived at through some public discussion and decision making process?

11. Human review: is there human review of machine generated results?

12. Right of inspection: are people aware of the findings and how they were created?

13. Right to challenge and express a grievance: are there procedures for challenging the results, or for entering alternative data or interpretations into the record?

14. Redress and sanctions: if the individual has been treated unfairly and procedures violated, are there appropriate means of redress? Are there means for discovering violations and penalties to encourage responsible surveillant behavior?

15. Adequate data stewardship and protection: can the security of the data be adequately protected?

16. Equality-inequality regarding availability and application: a) is the means widely available or restricted to only the most wealthy, powerful or technologically sophisticated? b) within a setting, is the tactic broadly applied to all people or only to those less powerful or unable to resist? c) if there are means of resisting the provision of personal information, are these equally available, or restricted to the most privileged?

17. The symbolic meaning of a method: what does the use of a method communicate more generally?

18. The creation of unwanted precedents: is it likely to create precedents that will lead to its application in undesirable ways?

19. Negative effects on surveillors and third parties: are there negative effects on those beyond the subject?

C. Uses

20. Beneficiary: does application of the tactic serve broad community goals, the goals of the object of surveillance or the personal goals of the data collector?

21. Proportionality: is there an appropriate balance between the importance of the goal and the cost of the means?

22. Alternative means: are other less costly means available?

23. Consequences of inaction: where the means are very costly, what are the consequences of taking no surveillance action?

24. Protections: are adequate steps taken to minimize costs and risk?

25. Appropriate vs. inappropriate goals: are the goals of the data collection legitimate?

26. The goodness of fit between the means and the goal: is there a clear link between the information collected and the goal sought?

27. Information used for original vs. other unrelated purposes: is the personal information used for the reasons offered for its collection and for which consent may have been given and does the data stay with the original collector, or does it migrate elsewhere?

28. Failure to share secondary gains from the information: is the personal data collected used for profit without permission from, or benefit to, the person who provided it?

29. Unfair disadvantage: is the information used in such a way as to cause unwarranted harm or disadvantage to its subject?


4. Principles of Robotics (UK EPSRC 2011, republished in Boden et al. 2017)
(classified as academic)
9 / 50

These were nominally for "regulating robots in the real world", but are qualified so heavily that they fall far short of representing a practical resource.

1. The Principle of Killing
Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security

2. The Principle of Compliance
Humans, not robots, are responsible agents. Robots should be designed and operated, as far as is practicable, to comply with existing laws, fundamental rights and freedoms, including privacy

3. The Principle of Commoditisation
Robots are products. They should be designed using processes which assure their safety and security

4. The Principle of Transparency
Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent

5. The Principle of Legal Responsibility
The person with legal responsibility for a robot should be attributed. It should be possible to find out who is responsible for any robot


5. Ethical Impact Assessment of IT (Wright 2011)
(classified as academic)
9 / 50

A framework for the ethical impact assessment of information technology

  1. Respect for autonomy (right to liberty)
    Freedom from both controlling interference by others and from limitations, such as inadequate understanding, that prevent meaningful choice
  2. Dignity
    Freedom from exploitation and physical or mental abuse; and scope for participation in the formulation and implementation of policies that directly affect well-being
  3. Consent
    Informed, freely-given and documented
  4. Nonmaleficence
    Refrain from actions that cause harm
    1. Safety
    2. Social solidarity, inclusion and exclusion
    3. Isolation and substitution of human contact
    4. Discrimination and social sorting
  5. Beneficence
    Provide benefits, and balance benefits and drawbacks to produce the best overall results
    1. Universal service
    2. Accessibility
    3. Value sensitive design
    4. Sustainability
    5. Justice
    6. Equality and fairness (social justice)
  6. Privacy and data protection
    1. Data quality
    2. Purpose specification
    3. Use limitation
    4. Confidentiality, security and protection of data
    5. Transparency (openness)
    6. Individual participation and access to data
    7. Anonymity
    8. Privacy of personal communications: monitoring and location tracking
    9. Privacy of the person
    10. Privacy of personal behaviour

6. 'South Korean' Robot Ethics Charter (Akiko 2012)
(classified as a governmental organisation)
10 / 50

This is a mock-up of what a South Korean Presidential Charter might have looked like, had a commitment in the Intelligent Robots Development and Distribution Promotion Act 2008 ever been fulfilled.

Part 1: Manufacturing Standards

  1. a) Robot manufacturers must ensure that the autonomy of the robots they design is limited; in the event that it becomes necessary, it must always be possible for a human being to assume control over a robot.
  2. b) Robot manufacturers must maintain strict standards of quality control, taking all reasonable steps to ensure that the risk of death or injury to the user is minimized, and the safety of the community guaranteed.
  3. c) Robot manufacturers must take steps to ensure that the risk of psychological harm to users is minimized. `Psychological harm' in this sense includes any likelihood for the robot to induce antisocial or sociopathic behaviors, depression or anxiety, stress, and particularly addictions (such as gambling addiction).
  4. d) Robot manufacturers must ensure their product is clearly identifiable, and that this identification is protected from alteration.
  5. e) Robots must be designed so as to protect personal data, through means of encryption and secure storage.
  6. f) Robots must be designed so that their actions (online as well as real-world) are traceable at all times.
  7. g) Robot design must be ecologically sensitive and sustainable.

Part 2: Rights & Responsibilities of Users/Owners

Sec. 1: Rights and Expectations of Owners and Users

8 i) Owners have the right to be able to take control of their robot.

9 ii) Owners and users have the right to use of their robot without risk or fear of physical or psychological harm.

10 iii) Users have the right to security of their personal details and other sensitive information.

11 iv) Owners and users have the right to expect a robot to perform any task for which it has been explicitly designed (subject to Section 2 of this Charter).

Sec. 2: Responsibilities of Owners and Users

This Charter recognizes the user's right to utilize a robot in any way they see fit, so long as this use remains `fair' and `legal' within the parameters of the law. As such:

12 i) A user must not use a robot to commit an illegal act.

13 ii) A user must not use a robot in a way that may be construed as causing physical or psychological harm to an individual.

14 iii) An owner must take `reasonable precaution' to ensure that their robot does not pose a threat to the safety and well-being of individuals or their property.

Sec. 3: The following acts are an offense under Korean Law:

15 i) To deliberately damage or destroy a robot.

16 ii) Through gross negligence, to allow a robot to come to harm.

17 iii) It is a lesser but nonetheless serious offence to treat a robot in a way which may be construed as deliberately and inordinately abusive.

Part 3: Rights & Responsibilities for Robots

Sec. 1: Responsibilities of Robots

18 i) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

19 ii) A robot must obey any orders given to it by human beings, except where such orders would conflict with Part 3 Section 1 subsection i) of this Charter.

20 iii) A robot must not deceive a human being.

Sec 2: Rights of Robots

Under Korean Law, Robots are afforded the following fundamental rights:

21 i) The right to exist without fear of injury or death.

22 ii) The right to live an existence free from systematic abuse.


7. Meta-Principles for Privacy Protection (APF 2013)
(classified as a non-governmental organisation)
13 / 50

PRE-CONDITIONS

1. Evaluation
All proposals that have the potential to harm privacy must be subjected to prior evaluation against appropriate privacy principles.

2. Consultation
All evaluation processes must feature consultation processes with the affected public and their representative and advocacy organisations.

3. Transparency
Sufficient information must be disclosed in advance to enable meaningful and consultative evaluation processes to take place.

4. Justification
All privacy-intrusive aspects must be demonstrated to be necessary pre-conditions for the achievement of specific positive outcomes.

5. Proportionality
The benefits arising from all privacy-intrusive aspects must be demonstrated to be commensurate with their financial and other costs, and the risks that they give rise to.

DESIGN

6. Mitigation
Where privacy-intrusiveness cannot be avoided, mitigating measures must be conceived, implemented and sustained, in order to minimise the harm caused.

7. Controls
All privacy-intrusive aspects must be subject to controls, to ensure that practices reflect policies and procedures. Breaches must be subject to sanctions, and the sanctions must be applied.

REVIEW

8. Audit
All privacy-intrusive aspects and their associated justification, proportionality, transparency, mitigation measures and controls must be subject to review, periodically and when warranted.


8. European Commission's Statement on
Artificial Intelligence, Robotics and `Autonomous' Systems (EGE 2018)
(classified as a governmental organisation)
4 / 50

Ethical principles and democratic prerequisites

1 (a) Human dignity
The recognition of the inherent human state of being worthy of respect must not be violated by 'autonomous' technologies. This means, for instance, that there are limits to determinations and classifications concerning persons, made on the basis of algorithms and `autonomous' systems, especially when those affected by them are not informed about them. It also implies that there have to be (legal) limits to the ways in which people can be led to believe that they are dealing with human beings while in fact they are dealing with algorithms and smart machines.

2 (b) Autonomy
'Autonomous' systems must not impair the freedom of human beings to set their own standards and norms and be able to live according to them. All 'autonomous' technologies must, hence, honour the human ability to choose whether, when and how to delegate decisions and actions to them. This also involves the transparency and predictability of 'autonomous' systems.

3 (c) Responsibility
`Autonomous' systems should only be developed and used in ways that serve the global social and environmental good, as determined by outcomes of deliberative democratic processes. This implies that they should be designed so that their effects align with a plurality of fundamental human values and rights. Applications of AI and robotics should not pose unacceptable risks of harm to human beings, and not compromise human freedom and autonomy by illegitimately and surreptitiously reducing options for and knowledge of citizens.

4 (d) Justice, equity, and solidarity
Discriminatory biases in data sets used to train and run AI systems should be prevented or detected, reported and neutralised at the earliest stage possible. Equal access to `autonomous' technologies and fair distribution of benefits requires the formulating of new models of fair distribution and benefit sharing apt to respond to the economic transformations caused by automation, digitalisation and AI.

5 (e) Democracy
Key decisions on the regulation of AI development and application should be the result of democratic debate and public engagement. Value pluralism, diversity and accommodation of a variety of conceptions of the good life of citizens must not be jeopardised, subverted or equalised by new technologies that inhibit or influence political decision making.

6 (f) Rule of law and accountability
Rule of law, access to justice and the right to redress and a fair trial provide the necessary framework for ensuring the observance of human rights standards and potential AI specific regulations. There is a need to clarify with whom liabilities lie for damages caused by undesired behaviour of `autonomous' systems. Moreover, effective harm mitigation systems should be in place.

7 (g) Security, safety, bodily and mental integrity
Safety and security of `autonomous' systems materialises in three forms: (1) external safety for their environment and users, (2) reliability and internal robustness, e.g. against hacking, and (3) emotional safety with respect to human-machine interaction. All dimensions of safety must be taken into account by AI developers and strictly tested before release. Special attention should hereby be paid to persons who find themselves in a vulnerable position. Special attention should also be paid to potential dual use and weaponisation of AI, e.g. in cybersecurity, finance, infrastructure and armed conflict.

8 (h) Data protection and privacy
`Autonomous' systems must not interfere with the right to private life. Consideration may be given to the ongoing debate about the introduction of two new rights: the right to meaningful human contact and the right to not be profiled, measured, analysed, coached or nudged.

9 (i) Sustainability
AI technology must be in line with the human responsibility to ensure the basic preconditions for life on our planet, continued prospering for mankind and preservation of a good environment for future generations.


References

AHRC (2015) 'Rights and freedoms: right by right' Australian Human Rights Commission, April 2015, at http://www.humanrights.gov.au/rights-and-freedoms-right-right-0

Akiko (2012) 'South Korean Robot Ethics Charter 2012' Akiko's Blog, 2012, at https://akikok012um1.wordpress.com/south-korean-robot-ethics-charter-2012/

APF (2013) 'Meta-Principles for Privacy Protection' Australian Privacy Foundation, March 2013, at https://privacy.org.au/policies/meta-principles/

Beauchamp T.L. & Childress J.F. (1979) 'Principles of biomedical ethics' Oxford University Press, 1979

Boden M., Bryson J. et al. (2017) 'Principles of robotics: regulating robots in the real world' (the principles of the UK Engineering and Physical Sciences Research Council) Connection Science 29, 2 (April 2017) 124-129, at https://www.tandfonline.com/doi/pdf/10.1080/09540091.2016.1271400

ICCPR (1966) 'International Covenant on Civil and Political Rights' United Nations, 1966, at http://treaties.un.org/doc/Publication/UNTS/Volume%20999/volume-999-I-14668-English.pdf

EGE (2018) 'Statement on Artificial Intelligence, Robotics and `Autonomous' Systems' European Group on Ethics in Science and New Technologies, European Commission, March 2018, at https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf

Marx G.T. (1998) 'An Ethics For The New Surveillance' The Information Society 14, 3 (August 1998) 171-185

Wright D. (2011) 'A framework for the ethical impact assessment of information technology' Ethics Inf Technol 13 (2011) 199-226, at https://www.researchgate.net/profile/David_Wright24/publication/225629433_A_framework_for_the_ethical_impact_assessment_of_information_technology/links/5741c49a08ae298602ee247c/A-framework-for-the-ethical-impact-assessment-of-information-technology.pdf


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.


