
Principles for AI: A SourceBook

Revision of 15 April 2019, with editorial corrections of 26 August 2020
(Addition of JSAI, Sony, AustGovt, EC 2019, and Update of IBM, MS)

Prepared in support of Guidelines for the Responsible Business Use of AI

This supersedes the version of 10 February 2019

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2018-19

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/EC/GAIP.html


Introduction

During the current round of industry enthusiasm for Artificial Intelligence (AI), ambitious claims by technology providers have stimulated widespread public concern. IT suppliers, user organisations in business and government, and associations representing them, are naturally concerned about the prospect of regulation constraining their activities. During the period 2016-19, there has accordingly been a concerted effort by a wide variety of organisations to calm the public's nerves. This has included the publication of 'principles' and 'guidelines' for the implementation of AI. In addition, a number of sets of principles have been published by advocates for the public interest.

Most collections have been put together by, or depend heavily on, organisations that have a vested interest in developing, investing in and/or applying AI. Most collections have involved little or no effective engagement with advocates for the interests of the public. Moreover, the documents impose no actual obligations on any organisation to do, or to refrain from doing, anything, and they are not capable of being enforced. Any influence they have will derive from the hovering threat of deep public disquiet.

On the other hand, many of these documents have been developed by well-resourced organisations that have access to researchers, developers and implementors of various AI technologies. In extracting the documents' information content, considerable care is needed, in order to appreciate sub-texts, to consider why statements are framed as they are, to understand the effects of qualifying words, and to identify aspects that are entirely missing. Provided that such care is brought to the activity, there is a great deal of value to be extracted from these documents.

This document includes citations to and excerpts from 22 such documents. This document is complemented by a collection of 8 further documents that present principles arising from more general ethical analysis of IT's impacts. Together, the collection of 30 sets of principles provides a basis for a consolidated super-set of 50 Principles for Responsible AI, published in Clarke (2019). Each of the 22 documents in this set is given a score showing how many of the 50 Principles are at least modestly reflected in the document.


Contents

  1. Asimov (1942, 1993)
  2. British Standards Institution (2016)
  3. European Parliament (2016)
  4. The Greens / European Free Alliance Digital Working Group (Nov 2016)
  5. Association for Computing Machinery - ACM (Jan 2017)
  6. Future of Life Institute (Jan 2017)
  7. Japanese Society for Artificial Intelligence (May 2017)
  8. Internet Society (Apr 2017)
  9. Japanese Ministry of Internal Affairs and Communications (Oct 2017)
  10. Information Technology Industry Council (Oct 2017)
  11. UNI Global Union (Dec 2017)
  12. IEEE (Dec 2017)
  13. House of Lords (Apr 2018)
  14. Partnership on AI (Apr 2018)
  15. Google (Jun 2018)
  16. IBM (Sep 2018)
  17. The Public Voice (Oct 2018)
  18. The European Commission's Draft Guidelines (Nov 2018)
  19. Sony (Mar 2019)
  20. Australian Government (Apr 2019)
  21. Microsoft (Apr 2019)
  22. The European Commission's Guidelines (Apr 2019)

References


1. Asimov (1942, 1993)
Asimov's Laws of Robotics (Asimov 1942), as extended by Asimov's fiction (1942-1992), as interpreted in Clarke (1993)
(classified as a non-governmental organisation)
5 / 50

  1. The Meta-Law
    A robot may not act unless its actions are subject to the Laws of Robotics
  2. Law Zero
    A robot may not injure humanity, or, through inaction, allow humanity to come to harm
  3. Law One
    A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order Law
  4. Law Two
    (a) A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order Law
    (b) A robot must obey orders given it by superordinate robots, except where such orders would conflict with a higher-order Law
  5. Law Three
    (a) A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher-order Law
    (b) A robot must protect its own existence as long as such protection does not conflict with a higher-order Law
  6. Law Four
    A robot must perform the duties for which it has been programmed, except where that would conflict with a higher-order law
  7. The Procreation Law
    A robot may not take any part in the design or manufacture of a robot unless the new robot's actions are subject to the Laws of Robotics

2. British Standards Institution (2016)
Guide to the ethical design and application of robots and robotic systems (BS 2016), as reported in Devlin (2016)
(classified as an industry association)
6 / 50

  1. Robots should not be designed solely or primarily to kill or harm humans
  2. Humans, not robots, are the responsible agents
  3. It should be possible to find out who is responsible for any robot and its behaviour
  4. Designers should aim for transparency
  5. Care is needed in relation to deceptive conduct by robots

3. European Parliament (2016)
Recommendations on Civil Law Rules on Robotics CLA-EP (2016)
(classified as a governmental organisation)
16 / 50

  1. Beneficence
    Robots should act in the best interests of humans
  2. Non-maleficence
    The doctrine of 'first, do no harm', whereby robots should not harm a human
  3. Autonomy
    The capacity to make an informed, un-coerced decision about the terms of interaction with robots
  4. Justice
    Fair distribution of the benefits associated with robotics and affordability of homecare and healthcare robots in particular.
  5. Fundamental Rights
    Robotics research activities should respect fundamental rights and be conducted in the interests of the well-being of individuals and society in their design, implementation, dissemination and use. Human dignity - both physical and psychological - is always to be respected.
  6. Precaution
    Robotics research activities should be conducted in accordance with the precautionary principle, anticipating potential safety impacts of outcomes and taking due precautions, proportional to the level of protection, while encouraging progress for the benefit of society and the environment.
  7. Inclusiveness
    Robotics engineers guarantee transparency and respect for the legitimate right of access to information by all stakeholders. Inclusiveness allows for participation in decision-making processes by all stakeholders involved in or concerned by robotics research activities.
  8. Accountability
    Robotics engineers should remain accountable for the social, environmental and human health impacts that robotics may impose on present and future generations.
  9. Safety
    Robot designers should consider and respect people's physical wellbeing, safety, health and rights. A robotics engineer must preserve human wellbeing, while also respecting human rights, and disclose promptly factors that might endanger the public or the environment.
  10. Reversibility
    Reversibility, being a necessary condition of controllability, is a fundamental concept when programming robots to behave safely and reliably. A reversibility model tells the robot which actions are reversible and how to reverse them if they are. The ability to undo the last action or a sequence of actions allows users to undo undesired actions and get back to the 'good' stage of their work. [an illustrative code sketch follows this list]
  11. Privacy
    The right to privacy must always be respected. A robotics engineer should ensure that private information is kept secure and only used appropriately. Moreover, a robotics engineer should guarantee that individuals are not personally identifiable, aside from exceptional circumstances and then only with clear, unambiguous informed consent. Human informed consent should be pursued and obtained prior to any man-machine interaction. As such, robotics designers have a responsibility to develop and follow procedures for valid consent, confidentiality, anonymity, fair treatment and due process. Designers will comply with any requests that any related data be destroyed, and removed from any datasets.
  12. Maximising benefit and minimising harm
    Researchers should seek to maximise the benefits of their work at all stages, from inception through to dissemination. Harm to research participants or human subjects must be avoided. Where risks arise as an unavoidable and integral element of the research, robust risk assessment and management protocols should be developed and complied with. Normally, the risk of harm should be no greater than that encountered in ordinary life, i.e. people should not be exposed to risks greater than or additional to those to which they are exposed in their normal lifestyles. The operation of a robotics system should always be based on a thorough risk assessment process, which should be informed by the precautionary and proportionality principles.
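Principle 10 above is unusually concrete: a 'reversibility model' pairs each action with a procedure for undoing it, so that the user can return the system to the last 'good' state. The Parliament's text prescribes no implementation; purely as an illustration of the idea, the following Python sketch (all names are hypothetical) records reversible actions on an undo stack:

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Action:
        """A robot action paired with an optional inverse (None = irreversible)."""
        name: str
        execute: Callable[[], None]
        inverse: Optional[Callable[[], None]] = None

    class ReversibilityModel:
        """Tracks executed actions so undesired ones can be undone in LIFO order."""
        def __init__(self) -> None:
            self._undo_stack: list[Action] = []

        def perform(self, action: Action) -> None:
            action.execute()
            if action.inverse is not None:    # only reversible actions are undoable
                self._undo_stack.append(action)

        def undo_last(self) -> bool:
            """Undo the most recent reversible action; False if nothing to undo."""
            if not self._undo_stack:
                return False
            self._undo_stack.pop().inverse()
            return True

    # Hypothetical usage: a one-step movement that can be reversed.
    model = ReversibilityModel()
    position = {"x": 0}
    model.perform(Action(
        name="move_right",
        execute=lambda: position.update(x=position["x"] + 1),
        inverse=lambda: position.update(x=position["x"] - 1),
    ))
    model.undo_last()          # returns the system to the prior 'good' state
    assert position["x"] == 0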

4. The Greens / European Free Alliance Digital Working Group (Nov 2016)
Position on Robotics and AI (GEFA 2016)
(classified as a non-governmental organisation)
17 / 50

  1. An informed public debate
    Public input and an informed debate are of the utmost importance, with the aim of shaping the technological revolution so that it serves humanity with a series of rules, governing, in particular, liability and ethics
  2. Precautionary principle
    Robots and artificial intelligence should be developed and produced based on an impact assessment, to the best available technical standards regarding security and with the possibility to intervene. Apply the precautionary principle and assess the long term ethical implications of new technologies in the early phase of their development
  3. Do no harm-principle
    Robots should not be designed to kill or harm humans. Their use must take place according to guaranteed individual rights and fundamental rights, including privacy by design and in particular human integrity, human dignity and identity. We underline the primacy of the human being over the sole interest of science or society.
  4. Ecological footprint
    Apply the principles of regenerative design, increase energy efficiency by promoting the use of renewable technologies for robotics, the use and reuse of secondary raw materials, and the reduction of waste
  5. Enhancements
    The provision of social or health services should not depend on the acceptance of robotics and artificial intelligence as implants or extensions to the human body. Inclusion and diversity must be the highest priority of our societies. The dignity of persons with or without disabilities is inviolable.
  6. Autonomy of persons
    The right to information and consent must be protected, including the protection of persons who are not able to consent. We reject the notion of 'data ownership', which would run counter to data protection as a fundamental right and treat data as a tradable commodity
  7. Clear liabilities
    Legal responsibility should be attributed to a person. Regarding safety and security, producers shall be held responsible despite any existing non-liability clauses in user agreements. The unintended nature of possible damages should not automatically exonerate manufacturers, programmers or operators from their liability and responsibility. In order to reduce possible repercussions of failure and malfunctioning of sufficiently complex systems, we think that strict liability concepts should be evaluated, including compulsory insurance policies.
  8. Open environment
    We promote an open environment, from open standards and innovative licensing models, to open platforms and transparency, in order to avoid vendor lock-in that restrains interoperability
  9. Product safety
    Design robotics and artificial intelligence products to be safe, secure and fit for purpose. Robots and AI should not exploit vulnerable users.
  10. Funding
    The European Union and its Member States should fund research to that end in particular with regards to the ethical and legal effects of artificial intelligence.

5. Association for Computing Machinery (Jan 2017)
Principles for Algorithmic Transparency and Accountability (ACM 2017)
(classified as a professional association)
5 / 50

  1. Awareness
    Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
  2. Access and redress
    Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
  3. Accountability
    Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
  4. Explanation
    Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
  5. Data Provenance
    A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals. [an illustrative code sketch follows this list]
  6. Auditability
    Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
  7. Validation and Testing
    Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.
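Principles 5 and 6 invite a concrete reading: keep a structured description of how the training data was gathered, and record every decision in a form that can later be audited. The Python sketch below shows one possible shape for such records; it is not anything the ACM specifies, and all field names are assumptions:

    import hashlib
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class ProvenanceRecord:
        """Describes how a training dataset was collected (cf. Principle 5)."""
        dataset_name: str
        collection_method: str    # e.g. "web scrape", "opt-in survey"
        collected_from: str
        known_bias_notes: str     # exploration of potential gathering biases

    def audit_entry(model_version: str, inputs: dict, decision: str) -> dict:
        """One auditable decision record (cf. Principle 6): what was decided,
        by which model version, on which inputs, and when."""
        payload = json.dumps(inputs, sort_keys=True).encode()
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "input_digest": hashlib.sha256(payload).hexdigest(),  # tamper-evident
            "decision": decision,
        }

    provenance = ProvenanceRecord(
        dataset_name="loan_applications_2016",
        collection_method="digitised branch records",
        collected_from="retail banking division",
        known_bias_notes="urban branches over-represented",
    )
    audit_log = [audit_entry("credit-scorer-1.2", {"income": 52000}, "approve")]
    print(json.dumps(asdict(provenance), indent=2))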

6. Future of Life Institute (Jan 2017)
Asilomar AI Principles (FLI 2017)
(classified as a joint association)
11 / 50

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies.

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems' power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.


7. Japanese Society for Artificial Intelligence (May 2017)
Ethical Guidelines (JSAI 2017)
(classified as an industry association)
7 / 50

1 Contribution to humanity

Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity. They will protect basic human rights and will respect cultural diversity. As specialists, members of the JSAI need to eliminate the threat to human safety whilst designing, developing, and using AI.

2 Abidance of laws and regulations

Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not bring harm to others through violation of information or properties belonging to others. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.

3 Respect for the privacy of others

Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.

4 Fairness

Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI. Members of the JSAI will, to the best of their ability, ensure that AI is developed as a resource that can be used by humanity in a fair and equal manner.

5 Security

As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control. In the development and use of AI, members of the JSAI will always pay attention to safety, controllability, and required confidentiality while ensuring that users of AI are provided appropriate and sufficient information.

6 Act with integrity

Members of the JSAI are to acknowledge the significant impact which AI can have on society. They will therefore act with integrity and in a way that can be trusted by society. As specialists, members of the JSAI will not assert false or unclear claims and are obliged to explain the technical limitations or problems in AI systems truthfully and in a scientifically sound manner.

7 Accountability and Social Responsibility

Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed. In the event that potential danger is identified, a warning must be effectively communicated to all of society. Members of the JSAI will understand that their research and development can be used against their knowledge for the purposes of harming others, and will put in efforts to prevent such misuse. If misuse of AI is discovered and reported, there shall be no loss suffered by those who discover and report the misuse.

8 Communication with society and self-development

Members of the JSAI must aim to improve and enhance society's understanding of AI. Members of the JSAI understand that there are diverse views of AI within society, and will earnestly learn from them. They will strengthen their understanding of society and maintain consistent and effective communication with them, with the aim of contributing to the overall peace and happiness of mankind. As highly-specialized professionals, members of the JSAI will always strive for self-improvement and will also support others in pursuing the same goal.

9 Abidance of ethics guidelines by AI

AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.


8. Internet Society (Apr 2017)
Guiding Principles and Recommendations re AI and Machine Learning (ISOC 2017)
(classified as a non-governmental organisation)
11 / 50

  1. Ethical Considerations in Deployment and Design
    AI system designers and builders need to apply a user-centric approach to the technology. They need to consider their collective responsibility in building AI systems that will not pose security risks to the Internet and Internet users - Adopt ethical standards
  2. Ensure Interpretability of AI systems
    Decisions made by an AI agent should be possible to understand, especially if those decisions have implications for public safety, or result in discriminatory practices - Ensure Human Interpretability of Algorithmic Decisions and Empower Users
  3. Public Empowerment
    The public's ability to understand AI-enabled services, and how they work, is key to ensuring trust in the technology - 'Algorithmic Literacy' must be a basic skill, and Provide the public with information
  4. Responsible Deployment
    The capacity of an AI agent to act autonomously, and to adapt its behavior over time without human direction, calls for significant safety checks before deployment, and ongoing monitoring - Humans must be in control, Make safety a priority, Privacy is key, Think before you act, If they are connected, they must be secured, Responsible disclosure
  5. Ensuring Accountability
    Legal accountability has to be ensured when human agency is replaced by decisions of AI agents - Ensure legal certainty, Put users first and Assign liability up-front
  6. Social and Economic Impacts
    Stakeholders should shape an environment where AI provides socio-economic opportunities for all - All stakeholders should engage in an ongoing dialogue
  7. Open Governance
    The ability of various stakeholders, whether civil society, government, private sector or academia and the technical community, to inform and participate in the governance of AI is crucial for its safe deployment - Promote Multistakeholder Governance

9. Japanese Ministry of Internal Affairs and Communications (Oct 2017)
AI R&D Guidelines, as reported in Hirano (2017)
(classified as a governmental organisation)
11 / 50

  1. Collaboration
    Pay attention to the interconnectivity and interoperability of AI systems.
  2. Transparency
    Pay attention to the verifiability of inputs/outputs of AI systems and explainability of their decisions.
  3. Controllability
    Pay attention to the controllability of AI systems.
  4. Safety
    Take it into consideration that AI systems will not harm the life, body, or property of users or third parties through actuators or other devices.
  5. Security
    Pay attention to the security of AI systems.
  6. Privacy
    Take it into consideration that AI systems will not infringe the privacy of users or third parties.
  7. Ethics
    Respect human dignity and individual autonomy in R&D of AI systems.
  8. User Assistance
    Take it into consideration that AI systems will support users and make it possible to give them opportunities for choice in appropriate manners.
  9. Accountability
    Make efforts to fulfill their accountability to stakeholders including users of AI systems.

10. Information Technology Industry Council (Oct 2017)
AI Policy Principles (ITIC 2017)
(classified as an industry association)
6 / 50

  1. Responsible Design and Deployment
    We recognize our responsibility to integrate principles into the design of AI technologies, beyond compliance with existing laws. While the potential benefits to people and society are amazing, AI researchers, subject matter experts, and stakeholders should and do spend a great deal of time working to ensure the responsible design and deployment of AI systems. Highly autonomous AI systems must be designed consistent with international conventions that preserve human dignity, rights, and freedoms. As an industry, it is our responsibility to recognize potentials for use and misuse, the implications of such actions, and the responsibility and opportunity to take steps to avoid the reasonably predictable misuse of this technology by committing to ethics by design.
  2. Safety and Controllability
    Technologists have a responsibility to ensure the safe design of AI systems. Autonomous AI agents must treat the safety of users and third parties as a paramount concern, and AI technologies should strive to reduce risks to humans. Furthermore, the development of autonomous AI systems must have safeguards to ensure controllability of the AI system by humans, tailored to the specific context in which a particular system operates.
  3. Robust and Representative Data
    To promote the responsible use of data and ensure its integrity at every stage, industry has a responsibility to understand the parameters and characteristics of the data, to demonstrate the recognition of potentially harmful bias, and to test for potential bias before and throughout the deployment of AI systems. AI systems need to leverage large datasets, and the availability of robust and representative data for building and improving AI and machine learning systems is of utmost importance. [an illustrative code sketch follows this list]
  4. Interpretability
    We are committed to partnering with others across government, private industry, academia, and civil society to find ways to mitigate bias, inequity, and other potential harms in automated decision-making systems. Our approach to finding such solutions should be tailored to the unique risks presented by the specific context in which a particular system operates. In many contexts, we believe tools to enable greater interpretability will play an important role.
  5. Liability of AI Systems Due to Autonomy
    The use of AI to make autonomous consequential decisions about people, informed by - but often replacing decisions made by - human-driven bureaucratic processes, has led to concerns about liability. Acknowledging existing legal and regulatory frameworks, we are committed to partnering with relevant stakeholders to inform a reasonable accountability framework for all entities in the context of autonomous systems.
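The obligation in item 3 to 'test for potential bias before and throughout the deployment' can be made operational with even very simple checks. As a deliberately simplistic illustration (the metric, group labels and threshold are assumptions, not ITIC's), a demographic-parity test compares positive-outcome rates across groups:

    from collections import defaultdict

    def positive_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
        """Positive-outcome rate per group, from (group, outcome) pairs."""
        totals: dict[str, int] = defaultdict(int)
        positives: dict[str, int] = defaultdict(int)
        for group, outcome in decisions:
            totals[group] += 1
            positives[group] += outcome    # bool counts as 0 or 1
        return {g: positives[g] / totals[g] for g in totals}

    def parity_gap(decisions: list[tuple[str, bool]]) -> float:
        """Largest difference in positive rates between any two groups."""
        rates = positive_rates(decisions).values()
        return max(rates) - min(rates)

    # Hypothetical deployment check: flag the model if the gap exceeds 0.2.
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    if parity_gap(sample) > 0.2:
        print("warning: demographic parity gap exceeds threshold")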

11. UNI Global Union (Dec 2017)
Top 10 Principles for Ethical AI (UGU 2017)
(classified as a non-governmental organisation)
11 / 50

  1. Demand That AI Systems Are Transparent
    A transparent artificial intelligence system is one in which it is possible to discover how, and why, the system made a decision, or in the case of a robot, acted the way it did.
  2. Equip AI Systems With an 'Ethical Black Box'
    Full transparency in an AI system should be facilitated by the presence of a device that can record information about said system in the form of an 'ethical black box' that not only contains relevant data to ensure transparency and accountability of a system, but also includes clear data and information on the ethical considerations built into said system.
  3. Make AI Serve People and Planet
    This includes codes of ethics for the development, application and use of AI so that throughout their entire operational process, AI systems remain compatible and increase the principles of human dignity, integrity, freedom, privacy and cultural and gender diversity, as well as with fundamental human rights. In addition, AI systems must protect and even improve our planet's ecosystems and biodiversity.
  4. Adopt a Human-In-Command Approach
    An absolute precondition is that the development of AI must be responsible, safe and useful, where machines maintain the legal status of tools, and legal persons retain control over, and responsibility for, these machines at all times.
  5. Ensure a Genderless, Unbiased AI
    In the design and maintenance of AI, it is vital that the system is controlled for negative or harmful human-bias, and that any bias - be it gender, race, sexual orientation, age, etc. - is identified and is not propagated by the system.
  6. Share the Benefits of AI Systems
    AI technologies should benefit and empower as many people as possible. The economic prosperity created by AI should be distributed broadly and equally, to benefit all of humanity.
  7. Secure a Just Transition and Ensure Support for Fundamental Freedoms and Rights
    As AI systems develop and augmented realities are formed, workers and work tasks will be displaced. To ensure a just transition, as well as sustainable future developments, it is vital that corporate policies are put in place that ensure corporate accountability in relation to this displacement, such as retraining programmes and job change possibilities. Governmental measures to help displaced workers retrain and find new employment are additionally required.
  8. Establish Global Governance Mechanisms
    UNI recommends the establishment of multi-stakeholder Decent Work and Ethical AI governance bodies on global and regional levels. The bodies should include AI designers, manufacturers, owners, developers, researchers, employers, lawyers, CSOs and trade unions. Whistleblowing mechanisms and monitoring procedures to ensure the transition to, and implementation of, ethical AI must be established. The bodies should be granted the competence to recommend compliance processes and procedures.
  9. Ban the Attribution of Responsibility to Robots
    Robots should be designed and operated as far as is practicable to comply with existing laws, fundamental rights and freedoms, including privacy. This is linked to the question of legal responsibility. In line with Bryson et al 2011, UNI Global Union asserts that legal responsibility for a robot should be attributed to a person. Robots are not responsible parties under the law.
  10. Ban AI Arms Race
    Lethal autonomous weapons, including cyber warfare, should be banned.

12. IEEE (Dec 2017)
Principles for Autonomous and Intelligent Systems (A/IS) (IEEE 2017, pp.6-7, 22-32)
(classified as a professional association)
3 / 50

Principle 1 -- Human Rights

(1) Governance frameworks, including standards and regulatory bodies, should be established to oversee processes assuring that the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and of traceability to contribute to the building of public trust in A/IS.

(2) A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.

(3) For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights; A/IS should always be subordinate to human judgment and control.

Principle 2 -- Prioritizing Well-being

A/IS should prioritize human well-being as an outcome in all system designs, using the best available, and widely accepted, well-being metrics as their reference point. [The discussion appears to be primarily concerned with economic wellbeing]

Principle 3 -- Accountability

(1) Legislatures/courts should clarify issues of responsibility, culpability, liability, and accountability for A/IS where possible during development and deployment (so that manufacturers and users understand their rights and obligations).

(2) Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.

(3) Multi-stakeholder ecosystems should be developed to help create norms (which can mature to best practices and laws) where they do not exist ... (including representatives of civil society, law enforcement, insurers, manufacturers, engineers, lawyers, etc.).

(4) Systems for registration and record-keeping should be created so that it is always possible to find out who is legally responsible for a particular A/IS. Manufacturers/operators/owners of A/IS should register key, high-level parameters, including training data/training environment (if applicable), sensors/real-world data sources, algorithms, process graphs, model features (at various levels), user interfaces, actuators/outputs, and optimization goal/loss function/reward function

Principle 4 -- Transparency

Develop new standards that describe measurable, testable levels of transparency, so that systems can be objectively assessed and levels of compliance determined. For designers, such standards will provide a guide for self-assessing transparency during development and suggest mechanisms for improving transparency.

Principle 5 -- A/IS Technology Misuse and Awareness of it

Minimize the risks of misuse of A/IS by raising public awareness, providing ethics education, and educating government, lawmakers and enforcement agencies [but with no mention of obligations, sanctions and enforcement]


13. House of Lords Select Committee on Artificial Intelligence (Apr 2018)
Core Principles for AI, buried inside HoL (2018),
as extracted for the World Economic Forum (WEF) (Smith 2018)
(classified as a governmental organisation)
5 / 50

A WEF document claims that these "core principles" derive from a report commissioned by the House of Lords AI Select Committee, which is based on evidence from over 200 industry experts - most of whom presumably have at least a degree of self-interest in the outcome.

(1) AI must be a force for good - and diversity

The first principle argues that AI should be developed for the common good and benefit of humanity.

The report's authors argue the United Kingdom must actively shape the development and utilisation of AI, and call for "a shared ethical AI framework" that provides clarity against how this technology can best be used to benefit individuals and society.

They also say the prejudices of the past must not be unwittingly built into automated systems, and urge that such systems "be carefully designed from the beginning, with input from as diverse a group of people as possible".

(2) Intelligibility and fairness

The second principle demands that AI operates within parameters of intelligibility and fairness, and calls for companies and organisations to improve the intelligibility of their AI systems.

"Without this, regulators may need to step in and prohibit the use of opaque technology in significant and sensitive areas of life and society", the report warns.

(3) No Diminution of Data Rights or Privacy

Third, the report says artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.

It says the ways in which data is gathered and accessed need to be reconsidered. This, the report says, is designed to ensure companies have fair and reasonable access to data, while citizens and consumers can also protect their privacy.

"Large companies which have control over vast quantities of data must be prevented from becoming overly powerful within this landscape. We call on the government ... to review proactively the use and potential monopolisation of data by big technology companies operating in the UK".

(4) Flourishing alongside AI

The fourth principle stipulates all people should have the right to be educated as well as be enabled to flourish mentally, emotionally and economically alongside artificial intelligence.

For children, this means learning about using and working alongside AI from an early age. For adults, the report calls on government to invest in skills and training to negate the disruption caused by AI in the jobs market.

(5) Confronting the power to destroy

Fifth, and aligning with concerns around killer robots, the report says the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

"There is a significant risk that well-intended AI research will be misused in ways which harm people," the report says. "AI researchers and developers must consider the ethical implications of their work".


14. Partnership on AI (Apr 2018)
Our Work (Thematic Pillars) (PoAI 2018)
(classified as a joint association)
5 / 50

1 Safety-Critical AI

Advances in AI have the potential to improve outcomes, enhance quality, and reduce costs in such safety-critical areas as healthcare and transportation. Effective and careful applications of pattern recognition, automated decision making, and robotic systems show promise for enhancing the quality of life and preventing thousands of needless deaths.

However, where AI tools are used to supplement or replace human decision-making, we must be sure that they are safe, trustworthy, and aligned with the ethics and preferences of people who are influenced by their actions.

We will pursue studies and best practices around the fielding of AI in safety-critical application areas.

2 Fair, Transparent, and Accountable AI

AI has the potential to provide societal value by recognizing patterns and drawing inferences from large amounts of data. Data can be harnessed to develop useful diagnostic systems and recommendation engines, and to support people in making breakthroughs in such areas as biomedicine, public health, safety, criminal justice, education, and sustainability.

While such results promise to provide real benefits, we need to be sensitive to the possibility that there are hidden assumptions and biases in data, and therefore in the systems built from that data - in addition to a wide range of other system choices which can be impacted by biases, assumptions, and limits. This can lead to actions and recommendations that replicate those biases, and have serious blind spots.

Researchers, officials, and the public should be sensitive to these possibilities and we should seek to develop methods that detect and correct those errors and biases, not replicate them. We also need to work to develop systems that can explain the rationale for inferences.

We will pursue opportunities to develop best practices around the development and fielding of fair, explainable, and accountable AI systems.

3 AI, Labor, and the Economy

AI advances will undoubtedly have multiple influences on the distribution of jobs and nature of work. While advances promise to inject great value into the economy, they can also be the source of disruptions as new kinds of work are created and other types of work become less needed due to automation.

Discussions are rising on the best approaches to minimizing potential disruptions, making sure that the fruits of AI advances are widely shared and competition and innovation are encouraged and not stifled. We seek to study and understand best paths forward, and play a role in this discussion.

4 Collaborations Between People and AI Systems

A promising area of AI is the design of systems that augment the perception, cognition, and problem-solving abilities of people. Examples include the use of AI technologies to help physicians make more timely and accurate diagnoses and assistance provided to drivers of cars to help them to avoid dangerous situations and crashes.

Opportunities for R&D and for the development of best practices on AI-human collaboration include methods that provide people with clarity about the understandings and confidence that AI systems have about situations, means for coordinating human and AI contributions to problem solving, and enabling AI systems to work with people to resolve uncertainties about human goals.

5 Social and Societal Influences of AI

AI advances will touch people and society in numerous ways, including potential influences on privacy, democracy, criminal justice, and human rights. For example, while technologies that personalize information and that assist people with recommendations can provide people with valuable assistance, they could also inadvertently or deliberately manipulate people and influence opinions.

We seek to promote thoughtful collaboration and open dialogue about the potential subtle and salient influences of AI on people and society.

6 AI and Social Good

AI offers great potential for promoting the public good, for example in the realms of education, housing, public health, and sustainability. We see great value in collaborating with public and private organizations, including academia, scientific societies, NGOs, social entrepreneurs, and interested private citizens to promote discussions and catalyze efforts to address society's most pressing challenges.

Some of these projects may address deep societal challenges and will be moonshots - ambitious big bets that could have far-reaching impacts. Others may be creative ideas that could quickly produce positive results by harnessing AI advances.


15. Google (Jun 2018)
Objectives for AI applications (Pichai 2018)
(classified as a corporation)
7 / 50

We will assess AI applications in view of the following objectives. We believe that AI should:

1. Be socially beneficial.

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.

2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

4. Be accountable to people.

We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

6. Uphold high standards of scientific excellence.

Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to progress AI development.

We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

7. Be made available for uses that accord with these principles.

Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:

AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue. These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.

o--o--o--o--o--o--o--o

Google's announcement was met with immediate scepticism (Newcomer 2018): "[With the exception of not working on 'technologies whose principal purpose or implementation is to cause or directly facilitate injury to people'], the rest of the company's 'principles' are peppered with lawyerly hedging and vague commitments ... Without promising independent oversight, Google is just putting a new, less persuasive, spin on an old principle it's tried to bury: 'Don't be evil'".


16. IBM's Everyday Ethics for Artificial Intelligence (Sep 2018)
(IBM 2018)
(classified as a corporation)
7 / 50

1. Accountability

AI designers and developers are responsible for considering AI design, development, decision processes, and outcomes.

1.1. Make company policies clear and accessible to design and development teams from day one so that no one is confused about issues of responsibility or accountability. As an AI designer or developer, it is your responsibility to know.

1.2. Understand where the responsibility of the company/software ends. You may not have control over how data or a tool will be used by a user, client, or other external source.

1.3. Keep detailed records of your design processes and decision making. Determine a strategy for keeping records during the design and development process to encourage best practices and encourage iteration.

1.4. Adhere to your company's business conduct guidelines. Also, understand national and international laws, regulations, and guidelines that your AI may have to work within. You can find other related resources in the IEEE Ethically Aligned Design document.

2. Value Alignment

AI should be designed with the norms and values of your user group in mind.

2.1. Consider the culture that establishes the value systems you're designing within. Whenever possible, bring in policymakers and academics that can help your team articulate relevant perspectives.

2.2. Work with design researchers to understand and reflect your users' values.

2.3. Consider mapping out your understanding of your users' values and aligning the AI's actions accordingly with an Ethics Canvas. Values will be specific to certain use cases and affected communities. Alignment will allow users to better understand your AI's actions and intents.

3. Explainability

AI should be designed for humans to easily perceive, detect, and understand its decision process.

3.1. Allow for questions. A user should be able to ask why an AI is doing what it's doing on an ongoing basis. This should be clear and up front in the user interface at all times.

3.2. Decision-making processes must be reviewable, especially if the AI is working with highly sensitive personal information, such as personally identifiable information, protected health information, and/or biometric data.

3.3. When an AI is assisting users with making any highly sensitive decisions, the AI must be able to provide them with a sufficient explanation of recommendations, the data used, and the reasoning behind the recommendations.

3.4. Teams should have and maintain access to a record of an AI's decision processes and be amenable to verification of those decision processes.
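Guideline 3.3's demand for 'a sufficient explanation of recommendations, the data used, and the reasoning' is commonly met in practice by returning an explanation object alongside each decision. The toy Python sketch below shows only that shape; the scoring rule and field names are invented for illustration, not IBM's:

    from dataclasses import dataclass

    @dataclass
    class ExplainedDecision:
        decision: str
        data_used: dict[str, float]
        reasoning: list[str]      # human-readable contribution notes

    def score_applicant(features: dict[str, float]) -> ExplainedDecision:
        """Toy linear scorer that reports how each feature moved the result."""
        weights = {"income": 0.6, "debt": -0.8}        # invented weights
        contributions = {f: weights.get(f, 0.0) * v for f, v in features.items()}
        total = sum(contributions.values())
        reasoning = [f"{f}: contributed {c:+.1f}" for f, c in contributions.items()]
        return ExplainedDecision(
            decision="approve" if total > 0 else "decline",
            data_used=features,
            reasoning=reasoning,
        )

    result = score_applicant({"income": 3.0, "debt": 1.0})
    print(result.decision, result.reasoning)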

4. Fairness

AI must be designed to minimize bias and promote inclusive representation.

4.1. Real-time analysis of AI brings to light both intentional and unintentional biases. When bias in data becomes apparent, the team must investigate and understand where it originated and how it can be mitigated.

4.2. Design and develop without intentional biases and schedule team reviews to avoid unintentional biases. Unintentional biases can include stereotyping, confirmation bias, and sunk cost bias.

4.3. Instill a feedback mechanism or open dialogue with users to raise awareness of user-identified biases or issues. For example, Woebot asks 'Let me know what you think' after suggesting a link.

5. User Data Rights

AI must be designed to protect user data and preserve the user's power over access and uses.

5.1. Users should always maintain control over what data is being used and in what context. They can deny access to personal data that they may find compromising or unfit for an AI to know or use.

5.2. Allow users to deny service or data by having the AI ask for permission before an interaction or providing the option during an interaction. Privacy settings and permissions should be clear, findable, and adjustable.

5.3. Provide full disclosure on how the personal information is being used or shared.

5.4. Users' data should be protected from theft, misuse, or data corruption.

5.5. Forbid use of another company's data without permission when creating a new AI service.

5.6. Recognize and adhere to applicable national and international rights laws when designing for an AI's acceptable user data access permissions.


17. The Public Voice's Universal Guidelines for AI (Oct 2018)
(TPV 2018)
(classified as a non-governmental organisation)
14 / 50

New developments in Artificial Intelligence are transforming the world, from science and industry to government administration and finance. The rise of AI decision-making also implicates fundamental rights of fairness, accountability, and transparency. Modern data analysis produces significant outcomes that have real life consequences for people in employment, housing, credit, commerce, and criminal sentencing. Many of these techniques are entirely opaque, leaving individuals unaware whether the decisions were accurate, fair, or even about them.

We propose these Universal Guidelines to inform and improve the design and use of AI. The Guidelines are intended to maximize the benefits of AI, to minimize the risk, and to ensure the protection of human rights. These Guidelines should be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems. We state clearly that the primary responsibility for AI systems must reside with those institutions that fund, develop, and deploy these systems.

  1. Right to Transparency. All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome.
  2. Right to Human Determination. All individuals have the right to a final determination made by a person.
  3. Identification Obligation. The institution responsible for an AI system must be made known to the public.
  4. Fairness Obligation. Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions.
  5. Assessment and Accountability Obligation. An AI system should only be deployed after an adequate evaluation of its purpose and objectives, its benefits, as well as its risks. Institutions must be responsible for decisions made by an AI system.
  6. Accuracy, Reliability, and Validity Obligations. Institutions must ensure the accuracy, reliability, and validity of decisions.
  7. Data Quality Obligation. Institutions must establish data provenance, and assure quality and relevance for the data input into algorithms.
  8. Public Safety Obligation. Institutions must assess the public safety risks that arise from the deployment of AI systems that direct or control physical devices, and implement safety controls.
  9. Cybersecurity Obligation. Institutions must secure AI systems against cybersecurity threats.
  10. Prohibition on Secret Profiling. No institution shall establish or maintain a secret profiling system.
  11. Prohibition on Unitary Scoring. No national government shall establish or maintain a general-purpose score on its citizens or residents.
  12. Termination Obligation. An institution that has established an AI system has an affirmative obligation to terminate the system if human control of the system is no longer possible.
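Guideline 12 implies a technical capability as much as a policy: the system must be able to detect that human control has lapsed and then shut itself down. One conventional approximation is a heartbeat watchdog, sketched below in Python; the timeout and all names are assumptions rather than part of the TPV text:

    import time

    class HumanControlWatchdog:
        """Flags loss of human control if no operator 'heartbeat' arrives in time."""
        def __init__(self, timeout_seconds: float) -> None:
            self.timeout = timeout_seconds
            self.last_heartbeat = time.monotonic()

        def heartbeat(self) -> None:
            """Record that a human operator has confirmed control."""
            self.last_heartbeat = time.monotonic()

        def control_lost(self) -> bool:
            return time.monotonic() - self.last_heartbeat > self.timeout

    # Simulated run: the operator confirms once, then goes silent.
    watchdog = HumanControlWatchdog(timeout_seconds=0.5)
    watchdog.heartbeat()
    time.sleep(1.0)            # no heartbeat for longer than the timeout
    if watchdog.control_lost():
        print("human control lost: initiating termination")  # safe shutdown here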

18. The EC's Draft Ethics Guidelines for Trustworthy AI (Nov 2018)
(EC 2018)
(classified as a governmental organisation)
29 / 50

Ethicality of Purpose is driven by the EU Charter of Fundamental Rights:

Ethical Principles

E1 Beneficence: Do Good

E2 Non maleficence: Do no Harm

E3 Autonomy: Preserve Human Agency

E4 Justice: Be Fair

E5 Explicability: Operate transparently

Requirements of Trustworthy AI (pp.14-18, 20-21 and 24-28)

Achieving Trustworthy AI means that the general and abstract principles need to be mapped into concrete requirements for AI systems and applications. The ten requirements listed below have been derived from the rights, principles and values of Chapter I. While they are all equally important, in different application domains and industries, the specific context needs to be taken into account for further handling thereof.

Key Guidance For Assessing Trustworthy AI

P0 Perform impact assessment (p.28)

P1 Accountability

P2 Data Governance

P3 Design for all

P4 Governance of AI Autonomy (Human oversight)

P5 Non-Discrimination

P6 Respect for (& Enhancement of) Human Autonomy [in final section, no. 7]

P7 Respect for Privacy [in final section, no. 6]

P8 Robustness

P9 Safety

P10 Transparency


19. Sony's AI Ethics Guidelines (Mar 2019)
(Sony 2019)
(classified as a corporation)
11 / 50

1. Supporting Creative Lifestyles and Building a Better Society

Through advancing its AI-related R&D and promoting the utilization of AI in a manner harmonized with society, Sony aims to support the exploration of the potential for each individual to empower their lives, and to contribute to enrichment of our culture and push our civilization forward by providing novel and creative types of kando [cf. inspiration]. Sony will engage in sustainable social development and endeavor to utilize the power of AI for contributing to global problem-solving and for the development of a peaceful and sustainable society.

2. Stakeholder Engagement

In order to solve the challenges arising from use of AI while striving for better AI utilization, Sony will seriously consider the interests and concerns of various stakeholders including its customers and creators, and proactively advance a dialogue with related industries, organizations, academic communities and more. For this purpose, Sony will construct the appropriate channels for ensuring that the content and results of these discussions are provided to officers and employees, including researchers and developers, who are involved in the corresponding businesses, as well as for ensuring further engagement with its various stakeholders.

3. Provision of Trusted Products and Services

Sony understands the need for safety when dealing with products and services utilizing AI and will continue to respond to security risks such as unauthorized access. AI systems may utilize statistical or probabilistic methods to achieve results. In the interest of Sony's customers and to maintain their trust, Sony will design whole systems with an awareness of the responsibility associated with the characteristics of such methods.

4. Privacy Protection

Sony, in compliance with laws and regulations as well as applicable internal rules and policies, seeks to enhance the security and protection of customers' personal data acquired via products and services utilizing AI, and build an environment where said personal data is processed in ways that respect the intention and trust of customers.

5. Respect for Fairness

In its utilization of AI, Sony will respect diversity and human rights of its customers and other stakeholders without any discrimination while striving to contribute to the resolution of social problems through its activities in its own and related industries.

6. Pursuit of Transparency

During the planning and design stages for its products and services that utilize AI, Sony will strive to introduce methods of capturing the reasoning behind the decisions made by AI utilized in said products and services. Additionally, it will endeavor to provide intelligible explanations and information to customers about the possible impact of using these products and services.

7. The Evolution of AI and Ongoing Education

People's lives have continuously changed with the advance in technology across history. Sony will be cognizant of the effects and impact of products and services that utilize AI on society and will proactively work to contribute to developing AI to create a better society and foster human talent capable of shaping our collective bright future through R&D and/or utilization of AI.


20. The Australian Government's Proposed Ethics Framework for AI (Apr 2019)
(CSIRO 2019), pp. 57-62
(classified as a governmental organisation)
13 / 50

The eight core principles referred to throughout this report are used as an ethical framework to guide organisations in the use or development of AI systems. These principles should be seen as goals that define whether an AI system is operating ethically.

1. Generates net-benefits.
The AI system must generate benefits for people that are greater than the costs.

2. Do no harm.
Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimise any negative outcomes.

3. Regulatory and legal compliance.
The AI system must comply with all relevant international, Australian Local, State/Territory and Federal government obligations, regulations and laws.

4. Privacy protection.
Any system, including AI systems, must ensure people's private data is protected and kept confidential, and prevent data breaches which could cause reputational, psychological, financial, professional or other types of harm to a person.

5. Fairness.
The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the 'training data' is free from bias or characteristics which may cause the algorithm to behave unfairly. [A sketch of a simple disparity check follows this list.]

6. Transparency and explainability.
People must be informed when an algorithm is being used that impacts them and they should be provided with information about what information the algorithm uses to make decisions.

7. Contestability.
When an algorithm significantly impacts a person there must be an efficient process to allow that person to challenge the use or output of the algorithm.

8. Accountability.
People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm.
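
The discussion paper does not prescribe a test for principle 5. As a minimal sketch, assuming group-membership labels are available for audit purposes, one widely used heuristic, the 'four-fifths rule' from US employment practice (not part of the CSIRO framework), compares favourable-decision rates across groups:

    from collections import Counter

    def selection_rates(decisions, groups):
        """Rate of favourable decisions per group (decisions are booleans)."""
        totals, favourable = Counter(), Counter()
        for d, g in zip(decisions, groups):
            totals[g] += 1
            favourable[g] += bool(d)
        return {g: favourable[g] / totals[g] for g in totals}

    def disparate_impact_ratio(decisions, groups):
        """Min/max ratio of group selection rates; values below 0.8 are a
        common warning sign, though no single threshold defines fairness."""
        rates = selection_rates(decisions, groups)
        return min(rates.values()) / max(rates.values())

    # Example: group A approved 2 of 4 times, group B only 1 of 4 -> ratio 0.5
    print(disparate_impact_ratio([1, 1, 0, 0, 1, 0, 0, 0],
                                 ["A", "A", "A", "A", "B", "B", "B", "B"]))

Such a check can flag, but not by itself establish, unfair discrimination; contestability (principle 7) still requires human review of flagged cases.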


21. Microsoft's AI Principles (Apr 2019)
(MS 2019)
(classified as a corporation)
5 / 50

Designing AI to be trustworthy requires creating solutions that reflect ethical principles that are deeply rooted in important and timeless values.

1. Fairness

AI systems should treat all people fairly

2. Inclusiveness

AI systems should empower everyone and engage people

3. Reliability & Safety

AI systems should perform reliably and safely

4. Transparency

AI systems should be understandable

5. Privacy & Security

AI systems should be secure and respect privacy

6. Accountability

AI systems should have algorithmic accountability


22. The EC's Ethics Guidelines for Trustworthy AI (Apr 2019)
(EC 2019)
(classified as a governmental organisation)
37 / 50

Ethical Principles in the Context of AI Systems (pp.12-13)

E1. Respect for human autonomy

The fundamental rights upon which the EU is founded are directed towards ensuring respect for the freedom and autonomy of human beings. Humans interacting with AI systems must be able to keep full and effective self-determination over themselves, and be able to partake in the democratic process. AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills. The allocation of functions between humans and AI systems should follow human-centric design principles and leave meaningful opportunity for human choice. This means securing human oversight over work processes in AI systems. AI systems may also fundamentally change the work sphere. They should support humans in the working environment, and aim for the creation of meaningful work.

E2. Prevention of harm

AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings. This entails the protection of human dignity as well as mental and physical integrity. AI systems and the environments in which they operate must be safe and secure. They must be technically robust and it should be ensured that they are not open to malicious use. Vulnerable persons should receive greater attention and be included in the development, deployment and use of AI systems. Particular attention must also be paid to situations where AI systems can cause or exacerbate adverse impacts due to asymmetries of power or information, such as between employers and employees, businesses and consumers or governments and citizens. Preventing harm also entails consideration of the natural environment and all living beings.

E3. Fairness

The development, deployment and use of AI systems must be fair. While we acknowledge that there are many different interpretations of fairness, we believe that fairness has both a substantive and a procedural dimension. The substantive dimension implies a commitment to: ensuring equal and just distribution of both benefits and costs, and ensuring that individuals and groups are free from unfair bias, discrimination and stigmatisation. If unfair biases can be avoided, AI systems could even increase societal fairness. Equal opportunity in terms of access to education, goods, services and technology should also be fostered. Moreover, the use of AI systems should never lead to people being deceived or unjustifiably impaired in their freedom of choice. Additionally, fairness implies that AI practitioners should respect the principle of proportionality between means and ends, and consider carefully how to balance competing interests and objectives. The procedural dimension of fairness entails the ability to contest and seek effective redress against decisions made by AI systems and by the humans operating them. In order to do so, the entity accountable for the decision must be identifiable, and the decision-making processes should be explicable.

E4. Explicability

Explicability is crucial for building and maintaining users' trust in AI systems. This means that processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions - to the extent possible - explainable to those directly and indirectly affected. Without such information, a decision cannot be duly contested. An explanation as to why a model has generated a particular output or decision (and what combination of input factors contributed to that) is not always possible. These cases are referred to as 'black box' algorithms and require special attention. In those circumstances, other explicability measures (e.g. traceability, auditability and transparent communication on system capabilities) may be required, provided that the system as a whole respects fundamental rights. The degree to which explicability is needed is highly dependent on the context and the severity of the consequences if that output is erroneous or otherwise inaccurate.

Requirements of Trustworthy AI (pp.14-20, 26-31):

R1 Human agency and oversight (pp.15-16, 26-27)

Including fundamental rights, human agency and human oversight

AI systems should ... act as enablers to a democratic, flourishing and equitable society by supporting the user's agency and fostering fundamental rights.
where ... risks exist, a fundamental rights impact assessment should be undertaken.
... mechanisms should be put into place to receive external feedback regarding AI systems that potentially infringe on fundamental rights.
... the right not to be subject to a decision based solely on automated processing when this produces legal effects on users or similarly significantly affects them.
Human oversight ... may be achieved through governance mechanisms such as a human-in-the-loop (HITL), human-on-the-loop (HOTL), or human-in-command (HIC) approach. [A sketch of such a gate follows this excerpt.]
... the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance is required.
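
The Guidelines name these mechanisms but leave their implementation open. As a minimal sketch, assuming a scoring system that produces a calibrated confidence value (the 0.9 threshold is illustrative), a human-in-the-loop gate might route low-confidence cases to a person:

    def decide_with_oversight(confidence, threshold=0.9):
        """HITL gate: decide autonomously only at high confidence;
        otherwise a person makes the final determination."""
        if confidence >= threshold:
            return "auto-approve"
        if confidence <= 1 - threshold:
            return "auto-decline"
        return "refer-to-human"

    for c in (0.95, 0.50, 0.05):
        print(c, decide_with_oversight(c))

Tightening the threshold widens the band of cases referred to humans, which is one way to honour the observation that less human oversight demands more extensive testing and stricter governance.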

R2 Technical robustness and safety (pp.16-17, 27-28)

Including resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility

A crucial component of achieving Trustworthy AI is technical robustness.

... requires that AI systems be developed with a preventative approach to risks and in a manner such that they reliably behave as intended while minimising unintentional and unexpected harm, and preventing unacceptable harm.

... protected against vulnerabilities ...

... steps should be taken to prevent and mitigate ... [unintended applications ... and potential abuse] ...

Resilience ... AI systems should have safeguards that enable a fallback plan in case of problems.

... it is crucial for safety measures to be developed and tested proactively.

It is critical that the results of AI systems are reproducible ... Reproducibility describes whether an AI experiment exhibits the same behaviour when repeated under the same conditions.
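
Reproducibility, as defined here, can be tested directly. A minimal sketch, assuming all randomness in the 'experiment' flows from a single seed (real training pipelines have further sources of non-determinism, e.g. hardware and parallelism, which this toy example ignores):

    import random

    def run_experiment(seed, n=5):
        """Stand-in for a training run: all randomness comes from one seed."""
        rng = random.Random(seed)
        return [round(rng.random(), 6) for _ in range(n)]

    # The same conditions (same seed) must yield the same behaviour;
    # a mismatch signals an uncontrolled source of non-determinism.
    assert run_experiment(42) == run_experiment(42)
    print("run reproduced:", run_experiment(42))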

R3 Privacy and data governance (pp.17, 28)

Including respect for privacy, quality and integrity of data, and access to data

Prevention of harm to privacy also necessitates adequate data governance that covers the quality and integrity of the data used, its relevance in light of the domain in which the AI systems will be deployed, its access protocols and the capability to process data in a manner that protects privacy.

[The risk that data contains] socially constructed biases, inaccuracies, errors and mistakes ... needs to be addressed prior to training with any given data set.

Processes and data sets used must be tested and documented at each step such as planning, training, testing and deployment.

R4 Transparency (pp.18, 28-29)

Including traceability, explainability and communication

The data sets and the processes [and the decisions made by the AI system] ... should be documented to the best possible standard to allow for traceability ...

explanations of the degree to which an AI system influences and shapes the organisational decision-making process, design choices of the system, and the rationale for deploying it, should be available ...

Traceability facilitates auditability as well as explainability.

Technical explainability requires that the decisions made by an AI system can be understood and traced by human beings. [A sketch for the simplest case follows this excerpt.]

... explanation should be timely and adapted to the expertise of the stakeholder concerned ...

... humans have the right to be informed that they are interacting with an AI system.

... the option to decide against this interaction in favour of human interaction should be provided where needed to ensure compliance with fundamental rights.
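
For the simplest model classes, traceable explanation is straightforward; the 'black box' cases flagged under E4 need heavier techniques (surrogate models, feature-importance estimates) that go beyond a short sketch. As a minimal illustration, assuming a linear scoring model (all feature names below are invented):

    def explain_linear_decision(weights, values, names):
        """In a linear model each feature contributes weight * value,
        so the decision can be traced factor by factor."""
        contributions = {n: w * v for n, w, v in zip(names, weights, values)}
        score = sum(contributions.values())
        ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
        return score, ranked

    score, ranked = explain_linear_decision(
        weights=[0.6, -0.3, 0.1],
        values=[1.0, 2.0, 0.5],
        names=["income", "debt", "tenure"])
    print(f"score = {score:.2f}")
    for name, contribution in ranked:
        print(f"  {name}: {contribution:+.2f}")

Ranking contributions by magnitude also supports the requirement that explanations be adapted to the stakeholder: a customer can be shown the dominant factor, an auditor the full breakdown.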

R5 Diversity, non-discrimination and fairness (pp.18-19, 29-30)

Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation

... systems should be ... designed in a way that allows all people to use AI products or services, regardless of their age, gender, abilities or characteristics.

... consult stakeholders who may directly or indirectly be affected by the system throughout its life cycle ... ensuring ... information, consultation and participation throughout the whole process

... solicit regular feedback even after deployment

R6 Societal and environmental wellbeing (pp.19, 30-31)

Including sustainability and environmental friendliness, social impact, society and democracy

... the broader society, other sentient beings and the environment should be also considered as stakeholders throughout the AI system's life cycle.

Beyond assessing the impact of an AI system's development, deployment and use on individuals, this impact should also be assessed from a societal perspective, taking into account its effect on institutions, democracy and society at large ... including not only political decision-making but also electoral contexts.

R7 Accountability (pp.19-20, 31)

Including auditability, minimisation and reporting of negative impact, trade-offs and redress.

... mechanisms [must] be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use.

AI systems should be able to be independently audited ... Auditability entails the enablement of the assessment of algorithms, data and design processes.

The use of impact assessments ... both prior to and during the development, deployment and use of AI systems can be helpful to minimise negative impact. These assessments must be proportionate to the risk that the AI systems pose.

... trade-offs should be explicitly acknowledged and evaluated ... , reasoned and properly documented ...

In situations in which no ethically acceptable trade-offs can be identified, the development, deployment and use of the AI system should not proceed in that form.

The decision-maker must be accountable for the manner in which the appropriate trade-off is being made, and should continually review the appropriateness of the resulting decision to ensure that necessary changes can be made to the system where needed.

When unjust adverse impact occurs, accessible mechanisms should be foreseen that ensure adequate redress.


References

ACM (2017) 'Statement on Algorithmic Transparency and Accountability' Association for Computing Machinery, January 2017, at https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf

Asimov I. (1942) 'Runaround' (originally published in 1942), reprinted in Asimov I. 'I, Robot' Grafton Books, London, 1968, pp. 33-51

BS (2016) 'Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems' British Standards Institute, 2016

Clarke R. (1993) 'Asimov's Laws of Robotics: Implications for Information Technology' In two parts, in IEEE Computer 26,12 (December 1993) 53-61, and 27,1 (January 1994) 57-66, at http://www.rogerclarke.com/SOS/Asimov.html

CLA-EP (2016) 'Recommendations on Civil Law Rules on Robotics' Committee on Legal Affairs of the European Parliament, 31 May 2016, at http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN

CSIRO (2019) 'Artificial Intelligence: Australia's Ethics Framework: A Discussion Paper' CSIRO, April 2019, at https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/supporting_documents/ArtificialIntelligenceethicsframeworkdiscussionpaper.pdf

Devlin H. (2016) 'Do no harm, don't discriminate: official guidance issued on robot ethics' The Guardian, 18 Sep 2016, at https://www.theguardian.com/technology/2016/sep/18/official-guidance-robot-ethics-british-standards-institute

EC (2018) 'Draft Ethics Guidelines for Trustworthy AI' High-Level Expert Group on Artificial Intelligence, European Commission, 18 December 2018, at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=57112

EC (2019) 'Ethics Guidelines for Trustworthy AI' High-Level Expert Group on Artificial Intelligence, European Commission, April 2019, at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=58477

FLI (2017) 'Asilomar AI Principles' Future of Life Institute, January 2017, at https://futureoflife.org/ai-principles/?cn-reloaded=1

GEFA (2016) 'Position on Robotics and AI' The Greens / European Free Alliance Digital Working Group, November 2016, at https://juliareda.eu/wp-content/uploads/2017/02/Green-Digital-Working-Group-Position-on-Robotics-and-Artificial-Intelligence-2016-11-22.pdf

Google (2018) 'Objectives for AI applications' Google, June 2018, at https://www.blog.google/technology/ai/ai-principles/

Hirano (2017) 'AI R&D guidelines' Proc. OECD Conf. on AI developments and applications, October 2017, at http://www.oecd.org/going-digital/ai-intelligent-machines-smart-policies/conference-agenda/ai-intelligent-machines-smart-policies-hirano.pdf

HOL (2018) 'AI in the UK: ready, willing and able?' Select Committee on Artificial Intelligence, House of Lords, April 2018, at https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf

IBM (2018) 'Everyday Ethics for Artificial Intelligence' IBM, September 2018, at https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf

IEEE (2017) 'Ethically Aligned Design', Version 2, IEEE, December 2017, at http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html

ISOC (2017) 'Artificial Intelligence and Machine Learning: Policy Paper' Internet Society, April 2017, at https://www.internetsociety.org/resources/doc/2017/artificial-intelligence-and-machine-learning-policy-paper/

ITIC (2017) 'AI Policy Principles' Information Technology Industry Council, undated but apparently of October 2017, at https://www.itic.org/resources/AI-Policy-Principles-FullReport2.pdf

JSAI (2017) 'Ethical Guidelines' The Japanese Society for Artificial Intelligence, May 2017, at http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf

MS (2019) 'Microsoft AI Principles' Microsoft, undated but apparently of April 2019, at https://www.microsoft.com/en-us/ai/our-approach-to-ai

Newcomer E. (2018) 'What Google's AI Principles Left Out: We're in a golden age for hollow corporate statements sold as high-minded ethical treatises' Bloomberg, 8 June 2018, at https://www.bloomberg.com/news/articles/2018-06-08/what-google-s-ai-principles-left-out

Pichai S. (2018) 'AI at Google: our principles' Google Blog, 7 Jun 2018, at https://www.blog.google/technology/ai/ai-principles/

PoAI (2018) 'Our Work (Thematic Pillars)' Partnership on AI, April 2018, at https://www.partnershiponai.org/about/#pillar-1

Smith R. (2018) '5 core principles to keep AI ethical' World Economic Forum, 19 Apr 2018, at https://www.weforum.org/agenda/2018/04/keep-calm-and-make-ai-ethical/

Sony (2019) 'Sony Group AI Ethics Guidelines' Sony, 1 Mar 2019, at https://www.sony.net/SonyInfo/csr_report/humanrights/hkrfmg0000007rtj-att/AI_Engagement_within_Sony_Group.pdf

TPV (2018) 'Universal Guidelines for Artificial Intelligence' The Public Voice, October 2018, at https://thepublicvoice.org/ai-universal-guidelines/

UGU (2017) 'Top 10 Principles for Ethical AI' UNI Global Union, December 2017, at http://www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University. He has also spent many years on the Board of the Australian Privacy Foundation, and is Company Secretary of the Internet Society of Australia.


